00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1711
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 2972
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.036 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.036 The recommended git tool is: git
00:00:00.037 using credential 00000000-0000-0000-0000-000000000002
00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.060 Fetching changes from the remote Git repository
00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.086 Using shallow fetch with depth 1
00:00:00.086 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.086 > git --version # timeout=10
00:00:00.112 > git --version # 'git version 2.39.2'
00:00:00.112 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.113 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.113 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.275 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.286 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.299 Checking out Revision d55dd09e9e6d4661df5d1073790609767cbcb60c (FETCH_HEAD)
00:00:04.299 > git config core.sparsecheckout # timeout=10
00:00:04.309 > git read-tree -mu HEAD # timeout=10
00:00:04.325 > git checkout -f d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=5
00:00:04.343 Commit message: "ansible/roles/custom_facts: Add subsystem info to VMDs' nvmes"
00:00:04.343 > git rev-list --no-walk d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=10
00:00:04.432 [Pipeline] Start of Pipeline
00:00:04.449 [Pipeline] library
00:00:04.450 Loading library shm_lib@master
00:00:04.450 Library shm_lib@master is cached. Copying from home.
00:00:04.466 [Pipeline] node
00:00:04.484 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.486 [Pipeline] {
00:00:04.501 [Pipeline] catchError
00:00:04.502 [Pipeline] {
00:00:04.517 [Pipeline] wrap
00:00:04.528 [Pipeline] {
00:00:04.536 [Pipeline] stage
00:00:04.538 [Pipeline] { (Prologue)
00:00:04.708 [Pipeline] sh
00:00:04.993 + logger -p user.info -t JENKINS-CI
00:00:05.010 [Pipeline] echo
00:00:05.012 Node: CYP12
00:00:05.018 [Pipeline] sh
00:00:05.318 [Pipeline] setCustomBuildProperty
00:00:05.327 [Pipeline] echo
00:00:05.328 Cleanup processes
00:00:05.333 [Pipeline] sh
00:00:05.620 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.620 782936 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.633 [Pipeline] sh
00:00:05.926 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.926 ++ grep -v 'sudo pgrep'
00:00:05.926 ++ awk '{print $1}'
00:00:05.926 + sudo kill -9
00:00:05.926 + true
00:00:05.954 [Pipeline] cleanWs
00:00:05.970 [WS-CLEANUP] Deleting project workspace...
00:00:05.970 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.979 [WS-CLEANUP] done
00:00:05.983 [Pipeline] setCustomBuildProperty
00:00:05.995 [Pipeline] sh
00:00:06.292 + sudo git config --global --replace-all safe.directory '*'
00:00:06.354 [Pipeline] nodesByLabel
00:00:06.356 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.366 [Pipeline] httpRequest
00:00:06.372 HttpMethod: GET
00:00:06.372 URL: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:06.375 Sending request to url: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:06.378 Response Code: HTTP/1.1 200 OK
00:00:06.378 Success: Status code 200 is in the accepted range: 200,404
00:00:06.379 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:07.576 [Pipeline] sh
00:00:07.863 + tar --no-same-owner -xf jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:07.883 [Pipeline] httpRequest
00:00:07.888 HttpMethod: GET
00:00:07.888 URL: http://10.211.164.101/packages/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz
00:00:07.889 Sending request to url: http://10.211.164.101/packages/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz
00:00:07.907 Response Code: HTTP/1.1 200 OK
00:00:07.908 Success: Status code 200 is in the accepted range: 200,404
00:00:07.908 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz
00:01:06.292 [Pipeline] sh
00:01:06.580 + tar --no-same-owner -xf spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz
00:01:09.897 [Pipeline] sh
00:01:10.184 + git -C spdk log --oneline -n5
00:01:10.184 3b33f4333 test/nvme/cuse: Fix typo
00:01:10.184 bf784f7a1 test/nvme: Set SEL only when the field is supported
00:01:10.184 a5153247d autopackage: Slurp spdk-ld-path while building against native DPDK
00:01:10.184 b14fb7292 autopackage: Cut number of make jobs in half under clang+LTO
00:01:10.184 1d70a0c9e configure: Hint compiler at what linker to use via -fuse-ld
00:01:10.198 [Pipeline] }
00:01:10.215 [Pipeline] // stage
00:01:10.224 [Pipeline] stage
00:01:10.226 [Pipeline] { (Prepare)
00:01:10.243 [Pipeline] writeFile
00:01:10.259 [Pipeline] sh
00:01:10.546 + logger -p user.info -t JENKINS-CI
00:01:10.558 [Pipeline] sh
00:01:10.841 + logger -p user.info -t JENKINS-CI
00:01:10.855 [Pipeline] sh
00:01:11.140 + cat autorun-spdk.conf
00:01:11.140 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.140 SPDK_TEST_NVMF=1
00:01:11.140 SPDK_TEST_NVME_CLI=1
00:01:11.140 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.140 SPDK_TEST_NVMF_NICS=e810
00:01:11.140 SPDK_RUN_UBSAN=1
00:01:11.140 NET_TYPE=phy
00:01:11.149 RUN_NIGHTLY=1
00:01:11.153 [Pipeline] readFile
00:01:11.176 [Pipeline] withEnv
00:01:11.178 [Pipeline] {
00:01:11.191 [Pipeline] sh
00:01:11.477 + set -ex
00:01:11.477 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:11.477 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:11.477 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.477 ++ SPDK_TEST_NVMF=1
00:01:11.477 ++ SPDK_TEST_NVME_CLI=1
00:01:11.477 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.477 ++ SPDK_TEST_NVMF_NICS=e810
00:01:11.477 ++ SPDK_RUN_UBSAN=1
00:01:11.477 ++ NET_TYPE=phy
00:01:11.477 ++ RUN_NIGHTLY=1
00:01:11.477 + case $SPDK_TEST_NVMF_NICS in
00:01:11.477 + DRIVERS=ice
00:01:11.477 + [[ tcp == \r\d\m\a ]]
00:01:11.477 + [[ -n ice ]]
00:01:11.477 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:11.477 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:19.661 rmmod: ERROR: Module irdma is not currently loaded
00:01:19.661 rmmod: ERROR: Module i40iw is not currently loaded
00:01:19.661 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:19.661 + true
00:01:19.661 + for D in $DRIVERS
00:01:19.661 + sudo modprobe ice
00:01:19.923 + exit 0
00:01:19.934 [Pipeline] }
00:01:19.952 [Pipeline] // withEnv
00:01:19.957 [Pipeline] }
00:01:19.972 [Pipeline] // stage
00:01:19.981 [Pipeline] catchError
00:01:19.983 [Pipeline] {
00:01:19.998 [Pipeline] timeout
00:01:19.998 Timeout set to expire in 40 min
00:01:20.000 [Pipeline] {
00:01:20.016 [Pipeline] stage
00:01:20.017 [Pipeline] { (Tests)
00:01:20.029 [Pipeline] sh
00:01:20.313 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.313 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.313 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.313 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:20.313 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.313 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.313 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:20.313 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.313 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.313 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.313 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.313 + source /etc/os-release
00:01:20.313 ++ NAME='Fedora Linux'
00:01:20.313 ++ VERSION='38 (Cloud Edition)'
00:01:20.313 ++ ID=fedora
00:01:20.313 ++ VERSION_ID=38
00:01:20.313 ++ VERSION_CODENAME=
00:01:20.313 ++ PLATFORM_ID=platform:f38
00:01:20.313 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:20.313 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:20.313 ++ LOGO=fedora-logo-icon
00:01:20.313 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:20.313 ++ HOME_URL=https://fedoraproject.org/
00:01:20.313 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:20.313 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:20.313 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:20.313 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:20.313 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:20.313 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:20.313 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:20.313 ++ SUPPORT_END=2024-05-14
00:01:20.313 ++ VARIANT='Cloud Edition'
00:01:20.313 ++ VARIANT_ID=cloud
00:01:20.313 + uname -a
00:01:20.313 Linux spdk-CYP-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:20.313 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:23.618 Hugepages
00:01:23.618 node hugesize free / total
00:01:23.618 node0 1048576kB 0 / 0
00:01:23.618 node0 2048kB 0 / 0
00:01:23.618 node1 1048576kB 0 / 0
00:01:23.618 node1 2048kB 0 / 0
00:01:23.618
00:01:23.618 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:23.618 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:23.618 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:23.618 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:23.618 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:23.618 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:23.618 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:23.618 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:23.618 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:23.618 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:23.618 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:23.618 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:23.618 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:23.618 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:23.618 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:23.618 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:23.618 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:23.618 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:23.618 + rm -f /tmp/spdk-ld-path
00:01:23.618 + source autorun-spdk.conf
00:01:23.618 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.618 ++ SPDK_TEST_NVMF=1
00:01:23.618 ++ SPDK_TEST_NVME_CLI=1
00:01:23.618 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.618 ++ SPDK_TEST_NVMF_NICS=e810
00:01:23.618 ++ SPDK_RUN_UBSAN=1
00:01:23.618 ++ NET_TYPE=phy
00:01:23.618 ++ RUN_NIGHTLY=1
00:01:23.618 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:23.618 + [[ -n '' ]]
00:01:23.618 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:23.618 + for M in /var/spdk/build-*-manifest.txt
00:01:23.618 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:23.618 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.618 + for M in /var/spdk/build-*-manifest.txt
00:01:23.618 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:23.618 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.618 ++ uname
00:01:23.618 + [[ Linux == \L\i\n\u\x ]]
00:01:23.618 + sudo dmesg -T
00:01:23.618 + sudo dmesg --clear
00:01:23.880 + dmesg_pid=784596
00:01:23.880 + [[ Fedora Linux == FreeBSD ]]
00:01:23.880 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.880 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.880 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:23.880 + sudo dmesg -Tw
00:01:23.880 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:23.880 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:23.880 + [[ -x /usr/src/fio-static/fio ]]
00:01:23.880 + export FIO_BIN=/usr/src/fio-static/fio
00:01:23.880 + FIO_BIN=/usr/src/fio-static/fio
00:01:23.880 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:23.880 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:23.880 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.880 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.880 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.880 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.880 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.880 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.880 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.880 Test configuration: 00:01:23.880 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.880 SPDK_TEST_NVMF=1 00:01:23.880 SPDK_TEST_NVME_CLI=1 00:01:23.880 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.880 SPDK_TEST_NVMF_NICS=e810 00:01:23.880 SPDK_RUN_UBSAN=1 00:01:23.880 NET_TYPE=phy 00:01:23.880 RUN_NIGHTLY=1 22:28:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:23.880 22:28:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.880 22:28:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.880 22:28:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.880 22:28:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.880 22:28:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.880 22:28:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.880 22:28:08 -- paths/export.sh@5 -- $ export PATH 00:01:23.880 22:28:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.880 22:28:08 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:23.880 22:28:08 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:23.880 22:28:08 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713212888.XXXXXX 00:01:23.880 22:28:08 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713212888.Uan11f 00:01:23.880 22:28:08 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:23.880 22:28:08 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
00:01:23.880 22:28:08 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:23.880 22:28:08 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:23.880 22:28:08 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.880 22:28:08 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:23.880 22:28:08 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:23.880 22:28:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.880 22:28:08 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:23.880 22:28:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.880 22:28:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.880 22:28:08 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.880 22:28:08 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.880 Mon Apr 15 08:28:08 PM UTC 2024 00:01:23.880 22:28:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.880 LTS-20-g3b33f4333 00:01:23.880 22:28:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.880 22:28:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.880 22:28:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.880 22:28:08 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:23.880 22:28:08 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:23.880 22:28:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.880 ************************************ 00:01:23.880 START TEST ubsan 00:01:23.880 ************************************ 00:01:23.880 22:28:08 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:23.880 using ubsan 00:01:23.880 00:01:23.880 real 0m0.000s 00:01:23.880 user 0m0.000s 00:01:23.880 sys 0m0.000s 00:01:23.880 22:28:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:23.880 22:28:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.880 ************************************ 00:01:23.880 END TEST ubsan 00:01:23.880 ************************************ 00:01:23.880 22:28:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.880 22:28:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.880 22:28:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.880 22:28:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.881 22:28:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.881 22:28:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.881 22:28:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.881 22:28:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.881 22:28:08 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:24.142 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:24.142 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:24.403 Using 'verbs' RDMA provider 00:01:39.889 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:52.133 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:52.133 Creating mk/config.mk...done. 00:01:52.133 Creating mk/cc.flags.mk...done. 00:01:52.133 Type 'make' to build. 00:01:52.133 22:28:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:52.133 22:28:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:52.133 22:28:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:52.133 22:28:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.133 ************************************ 00:01:52.133 START TEST make 00:01:52.133 ************************************ 00:01:52.133 22:28:35 -- common/autotest_common.sh@1104 -- $ make -j144 00:01:52.133 make[1]: Nothing to be done for 'all'. 00:02:00.268 The Meson build system 00:02:00.268 Version: 1.3.1 00:02:00.268 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:00.268 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:00.268 Build type: native build 00:02:00.268 Program cat found: YES (/usr/bin/cat) 00:02:00.268 Project name: DPDK 00:02:00.268 Project version: 23.11.0 00:02:00.268 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:00.268 C linker for the host machine: cc ld.bfd 2.39-16 00:02:00.268 Host machine cpu family: x86_64 00:02:00.268 Host machine cpu: x86_64 00:02:00.268 Message: ## Building in Developer Mode ## 00:02:00.268 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.268 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.268 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.268 Program python3 found: YES (/usr/bin/python3) 00:02:00.268 Program cat found: YES (/usr/bin/cat) 00:02:00.268 Compiler for C supports arguments -march=native: YES 00:02:00.268 Checking for size of "void *" : 8 00:02:00.268 Checking for size of "void *" : 8 (cached) 00:02:00.268 Library m found: YES 00:02:00.268 Library numa found: YES 00:02:00.268 Has header "numaif.h" : YES 00:02:00.268 Library fdt found: NO 00:02:00.268 Library execinfo found: NO 00:02:00.268 Has header "execinfo.h" : YES 00:02:00.268 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:00.268 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.268 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.268 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.268 Run-time dependency openssl found: YES 3.0.9 00:02:00.268 Run-time dependency libpcap found: YES 1.10.4 00:02:00.268 Has header "pcap.h" with dependency libpcap: YES 00:02:00.268 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.268 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.268 Compiler for C supports arguments -Wformat: YES 00:02:00.268 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.268 Compiler for C supports arguments -Wformat-security: NO 00:02:00.268 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.268 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.268 Compiler for C 
supports arguments -Wnested-externs: YES 00:02:00.268 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.268 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.268 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.268 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.268 Compiler for C supports arguments -Wundef: YES 00:02:00.268 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.268 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.268 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.268 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.268 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.268 Program objdump found: YES (/usr/bin/objdump) 00:02:00.268 Compiler for C supports arguments -mavx512f: YES 00:02:00.268 Checking if "AVX512 checking" compiles: YES 00:02:00.268 Fetching value of define "__SSE4_2__" : 1 00:02:00.268 Fetching value of define "__AES__" : 1 00:02:00.268 Fetching value of define "__AVX__" : 1 00:02:00.268 Fetching value of define "__AVX2__" : 1 00:02:00.268 Fetching value of define "__AVX512BW__" : 1 00:02:00.268 Fetching value of define "__AVX512CD__" : 1 00:02:00.268 Fetching value of define "__AVX512DQ__" : 1 00:02:00.268 Fetching value of define "__AVX512F__" : 1 00:02:00.268 Fetching value of define "__AVX512VL__" : 1 00:02:00.268 Fetching value of define "__PCLMUL__" : 1 00:02:00.268 Fetching value of define "__RDRND__" : 1 00:02:00.268 Fetching value of define "__RDSEED__" : 1 00:02:00.268 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:00.268 Fetching value of define "__znver1__" : (undefined) 00:02:00.268 Fetching value of define "__znver2__" : (undefined) 00:02:00.268 Fetching value of define "__znver3__" : (undefined) 00:02:00.268 Fetching value of define "__znver4__" : (undefined) 00:02:00.268 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.268 Message: lib/log: Defining dependency "log" 00:02:00.268 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.268 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.268 Checking for function "getentropy" : NO 00:02:00.268 Message: lib/eal: Defining dependency "eal" 00:02:00.268 Message: lib/ring: Defining dependency "ring" 00:02:00.268 Message: lib/rcu: Defining dependency "rcu" 00:02:00.268 Message: lib/mempool: Defining dependency "mempool" 00:02:00.268 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.268 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.268 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.268 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.268 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.268 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:00.268 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:00.268 Compiler for C supports arguments -mpclmul: YES 00:02:00.268 Compiler for C supports arguments -maes: YES 00:02:00.268 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.268 Compiler for C supports arguments -mavx512bw: YES 00:02:00.268 Compiler for C supports arguments -mavx512dq: YES 00:02:00.268 Compiler for C supports arguments -mavx512vl: YES 00:02:00.268 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.268 Compiler for C supports arguments -mavx2: YES 00:02:00.268 Compiler for C supports arguments -mavx: YES 00:02:00.268 Message: lib/net: Defining dependency "net" 
00:02:00.268 Message: lib/meter: Defining dependency "meter" 00:02:00.268 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.269 Message: lib/pci: Defining dependency "pci" 00:02:00.269 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.269 Message: lib/hash: Defining dependency "hash" 00:02:00.269 Message: lib/timer: Defining dependency "timer" 00:02:00.269 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.269 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.269 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.269 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.269 Message: lib/power: Defining dependency "power" 00:02:00.269 Message: lib/reorder: Defining dependency "reorder" 00:02:00.269 Message: lib/security: Defining dependency "security" 00:02:00.269 Has header "linux/userfaultfd.h" : YES 00:02:00.269 Has header "linux/vduse.h" : YES 00:02:00.269 Message: lib/vhost: Defining dependency "vhost" 00:02:00.269 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.269 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.269 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.269 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.269 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.269 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.269 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.269 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.269 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.269 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.269 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.269 Configuring doxy-api-html.conf using configuration 00:02:00.269 Configuring doxy-api-man.conf using configuration 00:02:00.269 Program mandb found: YES (/usr/bin/mandb) 00:02:00.269 Program sphinx-build found: NO 00:02:00.269 Configuring rte_build_config.h using configuration 00:02:00.269 Message: 00:02:00.269 ================= 00:02:00.269 Applications Enabled 00:02:00.269 ================= 00:02:00.269 00:02:00.269 apps: 00:02:00.269 00:02:00.269 00:02:00.269 Message: 00:02:00.269 ================= 00:02:00.269 Libraries Enabled 00:02:00.269 ================= 00:02:00.269 00:02:00.269 libs: 00:02:00.269 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.269 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.269 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.269 00:02:00.269 Message: 00:02:00.269 =============== 00:02:00.269 Drivers Enabled 00:02:00.269 =============== 00:02:00.269 00:02:00.269 common: 00:02:00.269 00:02:00.269 bus: 00:02:00.269 pci, vdev, 00:02:00.269 mempool: 00:02:00.269 ring, 00:02:00.269 dma: 00:02:00.269 00:02:00.269 net: 00:02:00.269 00:02:00.269 crypto: 00:02:00.269 00:02:00.269 compress: 00:02:00.269 00:02:00.269 vdpa: 00:02:00.269 00:02:00.269 00:02:00.269 Message: 00:02:00.269 ================= 00:02:00.269 Content Skipped 00:02:00.269 ================= 00:02:00.269 00:02:00.269 apps: 00:02:00.269 dumpcap: explicitly disabled via build config 00:02:00.269 graph: explicitly disabled via build config 00:02:00.269 pdump: explicitly disabled via build config 00:02:00.269 proc-info: explicitly disabled via build config 00:02:00.269 test-acl: explicitly disabled via build config 
00:02:00.269 test-bbdev: explicitly disabled via build config 00:02:00.269 test-cmdline: explicitly disabled via build config 00:02:00.269 test-compress-perf: explicitly disabled via build config 00:02:00.269 test-crypto-perf: explicitly disabled via build config 00:02:00.269 test-dma-perf: explicitly disabled via build config 00:02:00.269 test-eventdev: explicitly disabled via build config 00:02:00.269 test-fib: explicitly disabled via build config 00:02:00.269 test-flow-perf: explicitly disabled via build config 00:02:00.269 test-gpudev: explicitly disabled via build config 00:02:00.269 test-mldev: explicitly disabled via build config 00:02:00.269 test-pipeline: explicitly disabled via build config 00:02:00.269 test-pmd: explicitly disabled via build config 00:02:00.269 test-regex: explicitly disabled via build config 00:02:00.269 test-sad: explicitly disabled via build config 00:02:00.269 test-security-perf: explicitly disabled via build config 00:02:00.269 00:02:00.269 libs: 00:02:00.269 metrics: explicitly disabled via build config 00:02:00.269 acl: explicitly disabled via build config 00:02:00.269 bbdev: explicitly disabled via build config 00:02:00.269 bitratestats: explicitly disabled via build config 00:02:00.269 bpf: explicitly disabled via build config 00:02:00.269 cfgfile: explicitly disabled via build config 00:02:00.269 distributor: explicitly disabled via build config 00:02:00.269 efd: explicitly disabled via build config 00:02:00.269 eventdev: explicitly disabled via build config 00:02:00.269 dispatcher: explicitly disabled via build config 00:02:00.269 gpudev: explicitly disabled via build config 00:02:00.269 gro: explicitly disabled via build config 00:02:00.269 gso: explicitly disabled via build config 00:02:00.269 ip_frag: explicitly disabled via build config 00:02:00.269 jobstats: explicitly disabled via build config 00:02:00.269 latencystats: explicitly disabled via build config 00:02:00.269 lpm: explicitly disabled via build config 00:02:00.269 member: explicitly disabled via build config 00:02:00.269 pcapng: explicitly disabled via build config 00:02:00.269 rawdev: explicitly disabled via build config 00:02:00.269 regexdev: explicitly disabled via build config 00:02:00.269 mldev: explicitly disabled via build config 00:02:00.269 rib: explicitly disabled via build config 00:02:00.269 sched: explicitly disabled via build config 00:02:00.269 stack: explicitly disabled via build config 00:02:00.269 ipsec: explicitly disabled via build config 00:02:00.269 pdcp: explicitly disabled via build config 00:02:00.269 fib: explicitly disabled via build config 00:02:00.269 port: explicitly disabled via build config 00:02:00.269 pdump: explicitly disabled via build config 00:02:00.269 table: explicitly disabled via build config 00:02:00.269 pipeline: explicitly disabled via build config 00:02:00.269 graph: explicitly disabled via build config 00:02:00.269 node: explicitly disabled via build config 00:02:00.269 00:02:00.269 drivers: 00:02:00.269 common/cpt: not in enabled drivers build config 00:02:00.269 common/dpaax: not in enabled drivers build config 00:02:00.269 common/iavf: not in enabled drivers build config 00:02:00.269 common/idpf: not in enabled drivers build config 00:02:00.269 common/mvep: not in enabled drivers build config 00:02:00.269 common/octeontx: not in enabled drivers build config 00:02:00.269 bus/auxiliary: not in enabled drivers build config 00:02:00.269 bus/cdx: not in enabled drivers build config 00:02:00.269 bus/dpaa: not in enabled drivers build config 
00:02:00.269 bus/fslmc: not in enabled drivers build config 00:02:00.269 bus/ifpga: not in enabled drivers build config 00:02:00.269 bus/platform: not in enabled drivers build config 00:02:00.269 bus/vmbus: not in enabled drivers build config 00:02:00.269 common/cnxk: not in enabled drivers build config 00:02:00.269 common/mlx5: not in enabled drivers build config 00:02:00.269 common/nfp: not in enabled drivers build config 00:02:00.269 common/qat: not in enabled drivers build config 00:02:00.269 common/sfc_efx: not in enabled drivers build config 00:02:00.269 mempool/bucket: not in enabled drivers build config 00:02:00.269 mempool/cnxk: not in enabled drivers build config 00:02:00.269 mempool/dpaa: not in enabled drivers build config 00:02:00.269 mempool/dpaa2: not in enabled drivers build config 00:02:00.269 mempool/octeontx: not in enabled drivers build config 00:02:00.269 mempool/stack: not in enabled drivers build config 00:02:00.269 dma/cnxk: not in enabled drivers build config 00:02:00.269 dma/dpaa: not in enabled drivers build config 00:02:00.269 dma/dpaa2: not in enabled drivers build config 00:02:00.269 dma/hisilicon: not in enabled drivers build config 00:02:00.269 dma/idxd: not in enabled drivers build config 00:02:00.269 dma/ioat: not in enabled drivers build config 00:02:00.269 dma/skeleton: not in enabled drivers build config 00:02:00.269 net/af_packet: not in enabled drivers build config 00:02:00.269 net/af_xdp: not in enabled drivers build config 00:02:00.269 net/ark: not in enabled drivers build config 00:02:00.269 net/atlantic: not in enabled drivers build config 00:02:00.269 net/avp: not in enabled drivers build config 00:02:00.269 net/axgbe: not in enabled drivers build config 00:02:00.269 net/bnx2x: not in enabled drivers build config 00:02:00.269 net/bnxt: not in enabled drivers build config 00:02:00.269 net/bonding: not in enabled drivers build config 00:02:00.269 net/cnxk: not in enabled drivers build config 00:02:00.269 net/cpfl: not in enabled drivers build config 00:02:00.269 net/cxgbe: not in enabled drivers build config 00:02:00.269 net/dpaa: not in enabled drivers build config 00:02:00.269 net/dpaa2: not in enabled drivers build config 00:02:00.269 net/e1000: not in enabled drivers build config 00:02:00.269 net/ena: not in enabled drivers build config 00:02:00.269 net/enetc: not in enabled drivers build config 00:02:00.269 net/enetfec: not in enabled drivers build config 00:02:00.269 net/enic: not in enabled drivers build config 00:02:00.269 net/failsafe: not in enabled drivers build config 00:02:00.269 net/fm10k: not in enabled drivers build config 00:02:00.269 net/gve: not in enabled drivers build config 00:02:00.269 net/hinic: not in enabled drivers build config 00:02:00.269 net/hns3: not in enabled drivers build config 00:02:00.269 net/i40e: not in enabled drivers build config 00:02:00.269 net/iavf: not in enabled drivers build config 00:02:00.269 net/ice: not in enabled drivers build config 00:02:00.269 net/idpf: not in enabled drivers build config 00:02:00.269 net/igc: not in enabled drivers build config 00:02:00.269 net/ionic: not in enabled drivers build config 00:02:00.269 net/ipn3ke: not in enabled drivers build config 00:02:00.269 net/ixgbe: not in enabled drivers build config 00:02:00.269 net/mana: not in enabled drivers build config 00:02:00.269 net/memif: not in enabled drivers build config 00:02:00.269 net/mlx4: not in enabled drivers build config 00:02:00.269 net/mlx5: not in enabled drivers build config 00:02:00.269 net/mvneta: not in enabled 
drivers build config 00:02:00.269 net/mvpp2: not in enabled drivers build config 00:02:00.269 net/netvsc: not in enabled drivers build config 00:02:00.269 net/nfb: not in enabled drivers build config 00:02:00.269 net/nfp: not in enabled drivers build config 00:02:00.269 net/ngbe: not in enabled drivers build config 00:02:00.269 net/null: not in enabled drivers build config 00:02:00.269 net/octeontx: not in enabled drivers build config 00:02:00.269 net/octeon_ep: not in enabled drivers build config 00:02:00.269 net/pcap: not in enabled drivers build config 00:02:00.269 net/pfe: not in enabled drivers build config 00:02:00.269 net/qede: not in enabled drivers build config 00:02:00.269 net/ring: not in enabled drivers build config 00:02:00.269 net/sfc: not in enabled drivers build config 00:02:00.269 net/softnic: not in enabled drivers build config 00:02:00.269 net/tap: not in enabled drivers build config 00:02:00.269 net/thunderx: not in enabled drivers build config 00:02:00.270 net/txgbe: not in enabled drivers build config 00:02:00.270 net/vdev_netvsc: not in enabled drivers build config 00:02:00.270 net/vhost: not in enabled drivers build config 00:02:00.270 net/virtio: not in enabled drivers build config 00:02:00.270 net/vmxnet3: not in enabled drivers build config 00:02:00.270 raw/*: missing internal dependency, "rawdev" 00:02:00.270 crypto/armv8: not in enabled drivers build config 00:02:00.270 crypto/bcmfs: not in enabled drivers build config 00:02:00.270 crypto/caam_jr: not in enabled drivers build config 00:02:00.270 crypto/ccp: not in enabled drivers build config 00:02:00.270 crypto/cnxk: not in enabled drivers build config 00:02:00.270 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.270 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.270 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.270 crypto/mlx5: not in enabled drivers build config 00:02:00.270 crypto/mvsam: not in enabled drivers build config 00:02:00.270 crypto/nitrox: not in enabled drivers build config 00:02:00.270 crypto/null: not in enabled drivers build config 00:02:00.270 crypto/octeontx: not in enabled drivers build config 00:02:00.270 crypto/openssl: not in enabled drivers build config 00:02:00.270 crypto/scheduler: not in enabled drivers build config 00:02:00.270 crypto/uadk: not in enabled drivers build config 00:02:00.270 crypto/virtio: not in enabled drivers build config 00:02:00.270 compress/isal: not in enabled drivers build config 00:02:00.270 compress/mlx5: not in enabled drivers build config 00:02:00.270 compress/octeontx: not in enabled drivers build config 00:02:00.270 compress/zlib: not in enabled drivers build config 00:02:00.270 regex/*: missing internal dependency, "regexdev" 00:02:00.270 ml/*: missing internal dependency, "mldev" 00:02:00.270 vdpa/ifc: not in enabled drivers build config 00:02:00.270 vdpa/mlx5: not in enabled drivers build config 00:02:00.270 vdpa/nfp: not in enabled drivers build config 00:02:00.270 vdpa/sfc: not in enabled drivers build config 00:02:00.270 event/*: missing internal dependency, "eventdev" 00:02:00.270 baseband/*: missing internal dependency, "bbdev" 00:02:00.270 gpu/*: missing internal dependency, "gpudev" 00:02:00.270 00:02:00.270 00:02:00.270 Build targets in project: 84 00:02:00.270 00:02:00.270 DPDK 23.11.0 00:02:00.270 00:02:00.270 User defined options 00:02:00.270 buildtype : debug 00:02:00.270 default_library : shared 00:02:00.270 libdir : lib 00:02:00.270 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:00.270 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:00.270 c_link_args : 00:02:00.270 cpu_instruction_set: native 00:02:00.270 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:00.270 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:00.270 enable_docs : false 00:02:00.270 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.270 enable_kmods : false 00:02:00.270 tests : false 00:02:00.270 00:02:00.270 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.270 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:00.270 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.270 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.270 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.270 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.270 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.270 [6/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.270 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.270 [8/264] Linking static target lib/librte_kvargs.a 00:02:00.270 [9/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.270 [10/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.270 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:00.270 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.270 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.270 [14/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.270 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.270 [16/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.270 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.531 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.531 [19/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.531 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.531 [21/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.531 [22/264] Linking static target lib/librte_log.a 00:02:00.531 [23/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.531 [24/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.531 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:00.531 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.531 [27/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.531 [28/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:02:00.531 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.531 [30/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.531 [31/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.531 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.531 [33/264] Linking static target lib/librte_pci.a 00:02:00.531 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.531 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.531 [36/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.531 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.531 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.531 [39/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.531 [40/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:00.531 [41/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.531 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.531 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:00.791 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.791 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.791 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.791 [47/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.791 [48/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.791 [49/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.791 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.791 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.791 [52/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.791 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.791 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.791 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.791 [56/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.791 [57/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:00.791 [58/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:00.791 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.791 [60/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.791 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.791 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.791 [63/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:00.791 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.791 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.791 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.791 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:00.791 [68/264] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:00.791 [69/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.791 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.791 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.791 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.791 [73/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.791 [74/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:00.791 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.791 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.791 [77/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.791 [78/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.791 [79/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:00.791 [80/264] Linking static target lib/librte_telemetry.a 00:02:00.791 [81/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.791 [82/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.791 [83/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:00.791 [84/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.791 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.791 [86/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.791 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.791 [88/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.791 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.791 [90/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.791 [91/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.791 [92/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.791 [93/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.791 [94/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.791 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.791 [96/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:00.791 [97/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:00.791 [98/264] Linking static target lib/librte_meter.a 00:02:00.791 [99/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:00.791 [100/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.791 [101/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.791 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:01.052 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:01.052 [104/264] Linking static target lib/librte_timer.a 00:02:01.052 [105/264] Linking static target lib/librte_ring.a 00:02:01.052 [106/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.052 [107/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.052 [108/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 
00:02:01.052 [109/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:01.052 [110/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.052 [111/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:01.052 [112/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:01.052 [113/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.052 [114/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.052 [115/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.052 [116/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.052 [117/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.052 [118/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:01.052 [119/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.052 [120/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.052 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:01.052 [122/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:01.052 [123/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:01.052 [124/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.052 [125/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:01.052 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:01.052 [127/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.052 [128/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.052 [129/264] Linking static target lib/librte_cmdline.a 00:02:01.052 [130/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:01.052 [131/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.052 [132/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.052 [133/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:01.052 [134/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.052 [135/264] Linking static target lib/librte_reorder.a 00:02:01.052 [136/264] Linking static target lib/librte_compressdev.a 00:02:01.052 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:01.052 [138/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:01.052 [139/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:01.052 [140/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:01.052 [141/264] Linking static target lib/librte_rcu.a 00:02:01.052 [142/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.052 [143/264] Linking static target lib/librte_net.a 00:02:01.052 [144/264] Linking static target lib/librte_security.a 00:02:01.052 [145/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.052 [146/264] Linking static target lib/librte_power.a 00:02:01.052 [147/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:01.052 [148/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:01.052 [149/264] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.052 [150/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.052 [151/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:01.052 [152/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:01.052 [153/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:01.052 [154/264] Linking static target lib/librte_mempool.a 00:02:01.052 [155/264] Linking static target lib/librte_dmadev.a 00:02:01.052 [156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.052 [157/264] Linking target lib/librte_log.so.24.0 00:02:01.052 [158/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:01.052 [159/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:01.052 [160/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:01.052 [161/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.052 [162/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:01.052 [163/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:01.052 [164/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:01.052 [165/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:01.052 [166/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:01.052 [167/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:01.052 [168/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:01.052 [169/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:01.052 [170/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:01.052 [171/264] Linking static target lib/librte_eal.a 00:02:01.052 [172/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:01.052 [173/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.052 [174/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.052 [175/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:01.052 [176/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.052 [177/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.311 [178/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:01.311 [179/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:01.311 [180/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.311 [181/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.311 [182/264] Linking static target lib/librte_mbuf.a 00:02:01.311 [183/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.311 [184/264] Linking static target drivers/librte_bus_vdev.a 00:02:01.311 [185/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.311 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:01.311 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:01.311 [188/264] Linking target lib/librte_kvargs.so.24.0 00:02:01.311 [189/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.311 [190/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 
00:02:01.311 [191/264] Linking static target lib/librte_hash.a 00:02:01.311 [192/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:01.311 [193/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.311 [194/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.311 [195/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.311 [196/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.311 [197/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.311 [198/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.311 [199/264] Linking static target drivers/librte_bus_pci.a 00:02:01.311 [200/264] Linking static target drivers/librte_mempool_ring.a 00:02:01.311 [201/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.312 [202/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:01.312 [203/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.312 [204/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.312 [205/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:01.572 [206/264] Linking static target lib/librte_cryptodev.a 00:02:01.572 [207/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.572 [208/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:01.572 [209/264] Linking target lib/librte_telemetry.so.24.0 00:02:01.572 [210/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.572 [211/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.572 [212/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.572 [213/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.572 [214/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:01.834 [215/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.834 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.834 [217/264] Linking static target lib/librte_ethdev.a 00:02:01.834 [218/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.834 [219/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.094 [220/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.094 [221/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.094 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.358 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.990 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.990 [225/264] Linking static target lib/librte_vhost.a 00:02:03.563 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:04.951 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.542 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.484 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.484 [230/264] Linking target lib/librte_eal.so.24.0 00:02:12.484 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:12.745 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:12.745 [233/264] Linking target lib/librte_ring.so.24.0 00:02:12.745 [234/264] Linking target lib/librte_meter.so.24.0 00:02:12.745 [235/264] Linking target lib/librte_dmadev.so.24.0 00:02:12.745 [236/264] Linking target lib/librte_pci.so.24.0 00:02:12.745 [237/264] Linking target lib/librte_timer.so.24.0 00:02:12.745 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:12.745 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:12.745 [240/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:12.745 [241/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:12.745 [242/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:12.745 [243/264] Linking target lib/librte_rcu.so.24.0 00:02:12.745 [244/264] Linking target lib/librte_mempool.so.24.0 00:02:12.745 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:13.006 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:13.006 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:13.006 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:13.006 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:13.267 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:13.267 [251/264] Linking target lib/librte_cryptodev.so.24.0 00:02:13.267 [252/264] Linking target lib/librte_net.so.24.0 00:02:13.267 [253/264] Linking target lib/librte_reorder.so.24.0 00:02:13.267 [254/264] Linking target lib/librte_compressdev.so.24.0 00:02:13.267 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:13.267 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:13.527 [257/264] Linking target lib/librte_security.so.24.0 00:02:13.527 [258/264] Linking target lib/librte_hash.so.24.0 00:02:13.527 [259/264] Linking target lib/librte_cmdline.so.24.0 00:02:13.527 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:13.527 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:13.528 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:13.528 [263/264] Linking target lib/librte_power.so.24.0 00:02:13.788 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:13.788 INFO: autodetecting backend as ninja 00:02:13.789 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:14.732 CC lib/log/log_flags.o 00:02:14.732 CC lib/log/log.o 00:02:14.732 CC lib/log/log_deprecated.o 00:02:14.732 CC lib/ut_mock/mock.o 00:02:14.732 CC lib/ut/ut.o 00:02:14.732 LIB libspdk_ut_mock.a 00:02:14.732 LIB libspdk_ut.a 00:02:14.732 LIB libspdk_log.a 
00:02:14.732 SO libspdk_ut_mock.so.5.0 00:02:14.732 SO libspdk_ut.so.1.0 00:02:14.732 SO libspdk_log.so.6.1 00:02:14.732 SYMLINK libspdk_ut_mock.so 00:02:14.732 SYMLINK libspdk_ut.so 00:02:14.732 SYMLINK libspdk_log.so 00:02:14.993 CC lib/ioat/ioat.o 00:02:14.993 CC lib/dma/dma.o 00:02:14.993 CXX lib/trace_parser/trace.o 00:02:14.993 CC lib/util/base64.o 00:02:14.993 CC lib/util/bit_array.o 00:02:14.993 CC lib/util/crc16.o 00:02:14.993 CC lib/util/cpuset.o 00:02:14.993 CC lib/util/crc32.o 00:02:14.993 CC lib/util/crc32c.o 00:02:14.993 CC lib/util/crc32_ieee.o 00:02:14.993 CC lib/util/crc64.o 00:02:14.993 CC lib/util/dif.o 00:02:14.993 CC lib/util/fd.o 00:02:14.993 CC lib/util/file.o 00:02:14.993 CC lib/util/hexlify.o 00:02:14.993 CC lib/util/iov.o 00:02:14.993 CC lib/util/math.o 00:02:14.993 CC lib/util/pipe.o 00:02:14.993 CC lib/util/strerror_tls.o 00:02:14.993 CC lib/util/fd_group.o 00:02:14.993 CC lib/util/string.o 00:02:14.993 CC lib/util/uuid.o 00:02:14.993 CC lib/util/xor.o 00:02:14.993 CC lib/util/zipf.o 00:02:15.255 CC lib/vfio_user/host/vfio_user_pci.o 00:02:15.255 CC lib/vfio_user/host/vfio_user.o 00:02:15.255 LIB libspdk_dma.a 00:02:15.255 SO libspdk_dma.so.3.0 00:02:15.255 LIB libspdk_ioat.a 00:02:15.255 SYMLINK libspdk_dma.so 00:02:15.255 SO libspdk_ioat.so.6.0 00:02:15.255 LIB libspdk_vfio_user.a 00:02:15.516 SYMLINK libspdk_ioat.so 00:02:15.516 SO libspdk_vfio_user.so.4.0 00:02:15.516 SYMLINK libspdk_vfio_user.so 00:02:15.516 LIB libspdk_util.a 00:02:15.516 SO libspdk_util.so.8.0 00:02:15.778 SYMLINK libspdk_util.so 00:02:15.778 LIB libspdk_trace_parser.a 00:02:15.778 SO libspdk_trace_parser.so.4.0 00:02:16.039 SYMLINK libspdk_trace_parser.so 00:02:16.039 CC lib/env_dpdk/env.o 00:02:16.039 CC lib/env_dpdk/pci.o 00:02:16.039 CC lib/env_dpdk/memory.o 00:02:16.039 CC lib/env_dpdk/init.o 00:02:16.039 CC lib/env_dpdk/threads.o 00:02:16.039 CC lib/env_dpdk/pci_ioat.o 00:02:16.039 CC lib/env_dpdk/pci_virtio.o 00:02:16.039 CC lib/vmd/vmd.o 00:02:16.039 CC lib/env_dpdk/pci_vmd.o 00:02:16.039 CC lib/env_dpdk/pci_idxd.o 00:02:16.039 CC lib/env_dpdk/pci_event.o 00:02:16.039 CC lib/vmd/led.o 00:02:16.039 CC lib/env_dpdk/sigbus_handler.o 00:02:16.039 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:16.039 CC lib/env_dpdk/pci_dpdk.o 00:02:16.039 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:16.039 CC lib/json/json_parse.o 00:02:16.039 CC lib/json/json_util.o 00:02:16.039 CC lib/json/json_write.o 00:02:16.039 CC lib/conf/conf.o 00:02:16.039 CC lib/rdma/common.o 00:02:16.039 CC lib/idxd/idxd.o 00:02:16.039 CC lib/rdma/rdma_verbs.o 00:02:16.039 CC lib/idxd/idxd_user.o 00:02:16.300 LIB libspdk_conf.a 00:02:16.300 SO libspdk_conf.so.5.0 00:02:16.300 LIB libspdk_rdma.a 00:02:16.300 LIB libspdk_json.a 00:02:16.300 SO libspdk_rdma.so.5.0 00:02:16.300 SO libspdk_json.so.5.1 00:02:16.300 SYMLINK libspdk_conf.so 00:02:16.300 SYMLINK libspdk_rdma.so 00:02:16.300 SYMLINK libspdk_json.so 00:02:16.300 LIB libspdk_idxd.a 00:02:16.561 SO libspdk_idxd.so.11.0 00:02:16.561 LIB libspdk_vmd.a 00:02:16.561 SO libspdk_vmd.so.5.0 00:02:16.561 SYMLINK libspdk_idxd.so 00:02:16.561 CC lib/jsonrpc/jsonrpc_server.o 00:02:16.561 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:16.561 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:16.561 CC lib/jsonrpc/jsonrpc_client.o 00:02:16.561 SYMLINK libspdk_vmd.so 00:02:16.878 LIB libspdk_jsonrpc.a 00:02:16.878 SO libspdk_jsonrpc.so.5.1 00:02:16.878 SYMLINK libspdk_jsonrpc.so 00:02:17.139 LIB libspdk_env_dpdk.a 00:02:17.139 SO libspdk_env_dpdk.so.13.0 00:02:17.139 CC lib/rpc/rpc.o 00:02:17.400 SYMLINK 
libspdk_env_dpdk.so 00:02:17.400 LIB libspdk_rpc.a 00:02:17.400 SO libspdk_rpc.so.5.0 00:02:17.400 SYMLINK libspdk_rpc.so 00:02:17.661 CC lib/trace/trace.o 00:02:17.661 CC lib/notify/notify.o 00:02:17.661 CC lib/trace/trace_flags.o 00:02:17.661 CC lib/notify/notify_rpc.o 00:02:17.661 CC lib/trace/trace_rpc.o 00:02:17.661 CC lib/sock/sock.o 00:02:17.661 CC lib/sock/sock_rpc.o 00:02:17.922 LIB libspdk_notify.a 00:02:17.922 SO libspdk_notify.so.5.0 00:02:17.922 LIB libspdk_trace.a 00:02:17.922 SO libspdk_trace.so.9.0 00:02:17.922 SYMLINK libspdk_notify.so 00:02:18.182 SYMLINK libspdk_trace.so 00:02:18.182 LIB libspdk_sock.a 00:02:18.182 SO libspdk_sock.so.8.0 00:02:18.182 SYMLINK libspdk_sock.so 00:02:18.182 CC lib/thread/thread.o 00:02:18.182 CC lib/thread/iobuf.o 00:02:18.443 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.443 CC lib/nvme/nvme_ctrlr.o 00:02:18.443 CC lib/nvme/nvme_fabric.o 00:02:18.443 CC lib/nvme/nvme_ns_cmd.o 00:02:18.443 CC lib/nvme/nvme_ns.o 00:02:18.443 CC lib/nvme/nvme_pcie_common.o 00:02:18.443 CC lib/nvme/nvme_pcie.o 00:02:18.443 CC lib/nvme/nvme_qpair.o 00:02:18.443 CC lib/nvme/nvme.o 00:02:18.443 CC lib/nvme/nvme_quirks.o 00:02:18.443 CC lib/nvme/nvme_transport.o 00:02:18.443 CC lib/nvme/nvme_discovery.o 00:02:18.443 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:18.443 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:18.443 CC lib/nvme/nvme_tcp.o 00:02:18.443 CC lib/nvme/nvme_opal.o 00:02:18.443 CC lib/nvme/nvme_io_msg.o 00:02:18.443 CC lib/nvme/nvme_poll_group.o 00:02:18.443 CC lib/nvme/nvme_zns.o 00:02:18.443 CC lib/nvme/nvme_cuse.o 00:02:18.443 CC lib/nvme/nvme_vfio_user.o 00:02:18.443 CC lib/nvme/nvme_rdma.o 00:02:19.388 LIB libspdk_thread.a 00:02:19.649 SO libspdk_thread.so.9.0 00:02:19.650 SYMLINK libspdk_thread.so 00:02:19.911 CC lib/init/json_config.o 00:02:19.911 CC lib/virtio/virtio.o 00:02:19.911 CC lib/virtio/virtio_vhost_user.o 00:02:19.911 CC lib/init/subsystem.o 00:02:19.911 CC lib/virtio/virtio_vfio_user.o 00:02:19.911 CC lib/init/subsystem_rpc.o 00:02:19.911 CC lib/virtio/virtio_pci.o 00:02:19.911 CC lib/init/rpc.o 00:02:19.911 CC lib/accel/accel.o 00:02:19.911 CC lib/accel/accel_rpc.o 00:02:19.911 CC lib/accel/accel_sw.o 00:02:19.911 CC lib/blob/blobstore.o 00:02:19.911 CC lib/blob/request.o 00:02:19.911 CC lib/blob/zeroes.o 00:02:19.911 CC lib/blob/blob_bs_dev.o 00:02:20.172 LIB libspdk_init.a 00:02:20.172 LIB libspdk_nvme.a 00:02:20.172 SO libspdk_init.so.4.0 00:02:20.172 LIB libspdk_virtio.a 00:02:20.172 SYMLINK libspdk_init.so 00:02:20.172 SO libspdk_virtio.so.6.0 00:02:20.172 SO libspdk_nvme.so.12.0 00:02:20.434 SYMLINK libspdk_virtio.so 00:02:20.434 CC lib/event/app.o 00:02:20.434 CC lib/event/reactor.o 00:02:20.434 CC lib/event/log_rpc.o 00:02:20.434 CC lib/event/app_rpc.o 00:02:20.434 CC lib/event/scheduler_static.o 00:02:20.434 SYMLINK libspdk_nvme.so 00:02:20.696 LIB libspdk_accel.a 00:02:20.696 SO libspdk_accel.so.14.0 00:02:20.958 LIB libspdk_event.a 00:02:20.958 SYMLINK libspdk_accel.so 00:02:20.958 SO libspdk_event.so.12.0 00:02:20.958 SYMLINK libspdk_event.so 00:02:21.219 CC lib/bdev/bdev_rpc.o 00:02:21.219 CC lib/bdev/bdev.o 00:02:21.219 CC lib/bdev/part.o 00:02:21.219 CC lib/bdev/bdev_zone.o 00:02:21.219 CC lib/bdev/scsi_nvme.o 00:02:22.159 LIB libspdk_blob.a 00:02:22.159 SO libspdk_blob.so.10.1 00:02:22.420 SYMLINK libspdk_blob.so 00:02:22.681 CC lib/lvol/lvol.o 00:02:22.681 CC lib/blobfs/blobfs.o 00:02:22.681 CC lib/blobfs/tree.o 00:02:23.254 LIB libspdk_bdev.a 00:02:23.254 LIB libspdk_blobfs.a 00:02:23.254 SO libspdk_blobfs.so.9.0 00:02:23.254 SO 
libspdk_bdev.so.14.0 00:02:23.254 LIB libspdk_lvol.a 00:02:23.254 SO libspdk_lvol.so.9.1 00:02:23.545 SYMLINK libspdk_blobfs.so 00:02:23.546 SYMLINK libspdk_bdev.so 00:02:23.546 SYMLINK libspdk_lvol.so 00:02:23.546 CC lib/nvmf/ctrlr.o 00:02:23.546 CC lib/nvmf/ctrlr_discovery.o 00:02:23.546 CC lib/nvmf/ctrlr_bdev.o 00:02:23.546 CC lib/ublk/ublk.o 00:02:23.546 CC lib/ublk/ublk_rpc.o 00:02:23.546 CC lib/nvmf/subsystem.o 00:02:23.546 CC lib/nvmf/nvmf.o 00:02:23.546 CC lib/nvmf/nvmf_rpc.o 00:02:23.546 CC lib/nvmf/transport.o 00:02:23.546 CC lib/nvmf/tcp.o 00:02:23.546 CC lib/nvmf/rdma.o 00:02:23.546 CC lib/ftl/ftl_core.o 00:02:23.546 CC lib/ftl/ftl_init.o 00:02:23.546 CC lib/ftl/ftl_layout.o 00:02:23.546 CC lib/ftl/ftl_debug.o 00:02:23.546 CC lib/ftl/ftl_sb.o 00:02:23.546 CC lib/ftl/ftl_io.o 00:02:23.546 CC lib/scsi/dev.o 00:02:23.546 CC lib/ftl/ftl_l2p.o 00:02:23.546 CC lib/scsi/port.o 00:02:23.546 CC lib/scsi/lun.o 00:02:23.546 CC lib/nbd/nbd.o 00:02:23.546 CC lib/ftl/ftl_l2p_flat.o 00:02:23.546 CC lib/ftl/ftl_nv_cache.o 00:02:23.805 CC lib/ftl/ftl_writer.o 00:02:23.805 CC lib/ftl/ftl_band.o 00:02:23.805 CC lib/scsi/scsi.o 00:02:23.805 CC lib/ftl/ftl_band_ops.o 00:02:23.805 CC lib/nbd/nbd_rpc.o 00:02:23.805 CC lib/scsi/scsi_bdev.o 00:02:23.805 CC lib/scsi/scsi_rpc.o 00:02:23.805 CC lib/ftl/ftl_rq.o 00:02:23.805 CC lib/ftl/ftl_l2p_cache.o 00:02:23.805 CC lib/scsi/scsi_pr.o 00:02:23.805 CC lib/ftl/ftl_reloc.o 00:02:23.805 CC lib/scsi/task.o 00:02:23.805 CC lib/ftl/ftl_p2l.o 00:02:23.805 CC lib/ftl/mngt/ftl_mngt.o 00:02:23.805 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:23.806 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:23.806 CC lib/ftl/utils/ftl_conf.o 00:02:23.806 CC lib/ftl/utils/ftl_md.o 00:02:23.806 CC lib/ftl/utils/ftl_mempool.o 00:02:23.806 CC lib/ftl/utils/ftl_bitmap.o 00:02:23.806 CC lib/ftl/utils/ftl_property.o 00:02:23.806 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:23.806 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:23.806 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:23.806 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:23.806 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:23.806 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:23.806 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:23.806 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:23.806 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:23.806 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:23.806 CC lib/ftl/base/ftl_base_bdev.o 00:02:23.806 CC lib/ftl/base/ftl_base_dev.o 00:02:23.806 CC lib/ftl/ftl_trace.o 00:02:24.066 LIB libspdk_nbd.a 00:02:24.066 SO libspdk_nbd.so.6.0 00:02:24.066 LIB libspdk_scsi.a 00:02:24.327 SO libspdk_scsi.so.8.0 00:02:24.327 SYMLINK libspdk_nbd.so 00:02:24.327 LIB libspdk_ublk.a 00:02:24.327 SO libspdk_ublk.so.2.0 00:02:24.327 SYMLINK libspdk_scsi.so 00:02:24.327 SYMLINK libspdk_ublk.so 00:02:24.589 LIB libspdk_ftl.a 00:02:24.589 CC lib/iscsi/conn.o 00:02:24.589 CC lib/iscsi/init_grp.o 00:02:24.589 CC lib/vhost/vhost_rpc.o 00:02:24.589 CC lib/vhost/vhost.o 00:02:24.589 CC lib/iscsi/iscsi.o 00:02:24.589 CC lib/vhost/vhost_blk.o 00:02:24.589 CC lib/iscsi/md5.o 00:02:24.589 CC lib/iscsi/portal_grp.o 00:02:24.589 
CC lib/vhost/vhost_scsi.o 00:02:24.589 CC lib/iscsi/param.o 00:02:24.589 CC lib/iscsi/tgt_node.o 00:02:24.589 CC lib/vhost/rte_vhost_user.o 00:02:24.589 CC lib/iscsi/iscsi_subsystem.o 00:02:24.589 CC lib/iscsi/iscsi_rpc.o 00:02:24.589 CC lib/iscsi/task.o 00:02:24.589 SO libspdk_ftl.so.8.0 00:02:25.162 SYMLINK libspdk_ftl.so 00:02:25.424 LIB libspdk_nvmf.a 00:02:25.424 SO libspdk_nvmf.so.17.0 00:02:25.424 LIB libspdk_vhost.a 00:02:25.684 SO libspdk_vhost.so.7.1 00:02:25.684 SYMLINK libspdk_nvmf.so 00:02:25.684 SYMLINK libspdk_vhost.so 00:02:25.684 LIB libspdk_iscsi.a 00:02:25.684 SO libspdk_iscsi.so.7.0 00:02:25.946 SYMLINK libspdk_iscsi.so 00:02:26.206 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.467 CC module/accel/iaa/accel_iaa.o 00:02:26.467 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.467 CC module/accel/error/accel_error_rpc.o 00:02:26.467 CC module/accel/error/accel_error.o 00:02:26.467 CC module/accel/dsa/accel_dsa.o 00:02:26.467 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.467 CC module/blob/bdev/blob_bdev.o 00:02:26.467 CC module/sock/posix/posix.o 00:02:26.467 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.467 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.467 CC module/accel/ioat/accel_ioat.o 00:02:26.467 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.467 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.467 LIB libspdk_env_dpdk_rpc.a 00:02:26.467 SO libspdk_env_dpdk_rpc.so.5.0 00:02:26.467 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.467 SYMLINK libspdk_env_dpdk_rpc.so 00:02:26.467 LIB libspdk_scheduler_gscheduler.a 00:02:26.467 LIB libspdk_accel_iaa.a 00:02:26.467 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:26.467 LIB libspdk_accel_error.a 00:02:26.728 SO libspdk_scheduler_gscheduler.so.3.0 00:02:26.728 LIB libspdk_scheduler_dynamic.a 00:02:26.728 LIB libspdk_accel_ioat.a 00:02:26.728 SO libspdk_accel_iaa.so.2.0 00:02:26.728 LIB libspdk_accel_dsa.a 00:02:26.728 SO libspdk_accel_error.so.1.0 00:02:26.728 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:26.728 SO libspdk_scheduler_dynamic.so.3.0 00:02:26.728 SO libspdk_accel_ioat.so.5.0 00:02:26.728 LIB libspdk_blob_bdev.a 00:02:26.728 SYMLINK libspdk_scheduler_gscheduler.so 00:02:26.728 SO libspdk_accel_dsa.so.4.0 00:02:26.728 SYMLINK libspdk_accel_iaa.so 00:02:26.728 SO libspdk_blob_bdev.so.10.1 00:02:26.728 SYMLINK libspdk_accel_error.so 00:02:26.728 SYMLINK libspdk_scheduler_dynamic.so 00:02:26.728 SYMLINK libspdk_accel_ioat.so 00:02:26.728 SYMLINK libspdk_accel_dsa.so 00:02:26.728 SYMLINK libspdk_blob_bdev.so 00:02:26.989 LIB libspdk_sock_posix.a 00:02:26.989 SO libspdk_sock_posix.so.5.0 00:02:26.989 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:26.989 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:26.989 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:26.989 CC module/bdev/raid/bdev_raid.o 00:02:26.989 CC module/bdev/raid/bdev_raid_rpc.o 00:02:26.989 CC module/bdev/null/bdev_null.o 00:02:26.989 CC module/bdev/raid/raid0.o 00:02:26.989 CC module/bdev/raid/bdev_raid_sb.o 00:02:26.989 CC module/bdev/null/bdev_null_rpc.o 00:02:26.989 CC module/bdev/raid/raid1.o 00:02:26.989 CC module/bdev/raid/concat.o 00:02:26.989 CC module/bdev/error/vbdev_error.o 00:02:26.989 CC module/bdev/gpt/gpt.o 00:02:26.989 CC module/bdev/delay/vbdev_delay.o 00:02:26.989 CC module/bdev/gpt/vbdev_gpt.o 00:02:26.989 CC module/bdev/lvol/vbdev_lvol.o 00:02:26.989 CC module/bdev/error/vbdev_error_rpc.o 00:02:26.989 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:26.989 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:26.989 CC 
module/bdev/ftl/bdev_ftl.o 00:02:26.989 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:26.989 CC module/bdev/nvme/bdev_nvme.o 00:02:27.250 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.250 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.250 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.250 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.250 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.250 CC module/bdev/malloc/bdev_malloc.o 00:02:27.250 CC module/bdev/nvme/nvme_rpc.o 00:02:27.250 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.250 CC module/bdev/aio/bdev_aio.o 00:02:27.250 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.250 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.250 CC module/bdev/nvme/vbdev_opal.o 00:02:27.250 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.250 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.250 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.250 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.250 CC module/bdev/split/vbdev_split.o 00:02:27.250 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.250 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.250 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.250 SYMLINK libspdk_sock_posix.so 00:02:27.250 LIB libspdk_blobfs_bdev.a 00:02:27.250 LIB libspdk_bdev_null.a 00:02:27.250 SO libspdk_blobfs_bdev.so.5.0 00:02:27.250 LIB libspdk_bdev_split.a 00:02:27.250 LIB libspdk_bdev_gpt.a 00:02:27.511 LIB libspdk_bdev_error.a 00:02:27.511 SO libspdk_bdev_null.so.5.0 00:02:27.511 SO libspdk_bdev_split.so.5.0 00:02:27.511 SO libspdk_bdev_gpt.so.5.0 00:02:27.511 LIB libspdk_bdev_passthru.a 00:02:27.511 SO libspdk_bdev_error.so.5.0 00:02:27.511 LIB libspdk_bdev_ftl.a 00:02:27.511 SYMLINK libspdk_blobfs_bdev.so 00:02:27.511 LIB libspdk_bdev_aio.a 00:02:27.511 SO libspdk_bdev_passthru.so.5.0 00:02:27.511 SYMLINK libspdk_bdev_null.so 00:02:27.511 LIB libspdk_bdev_malloc.a 00:02:27.511 SYMLINK libspdk_bdev_split.so 00:02:27.511 SYMLINK libspdk_bdev_gpt.so 00:02:27.511 LIB libspdk_bdev_delay.a 00:02:27.511 LIB libspdk_bdev_zone_block.a 00:02:27.511 SO libspdk_bdev_ftl.so.5.0 00:02:27.511 LIB libspdk_bdev_iscsi.a 00:02:27.511 SO libspdk_bdev_aio.so.5.0 00:02:27.511 SYMLINK libspdk_bdev_error.so 00:02:27.511 SO libspdk_bdev_delay.so.5.0 00:02:27.511 SO libspdk_bdev_malloc.so.5.0 00:02:27.511 SYMLINK libspdk_bdev_passthru.so 00:02:27.511 SO libspdk_bdev_zone_block.so.5.0 00:02:27.511 SO libspdk_bdev_iscsi.so.5.0 00:02:27.511 SYMLINK libspdk_bdev_ftl.so 00:02:27.511 SYMLINK libspdk_bdev_aio.so 00:02:27.511 LIB libspdk_bdev_virtio.a 00:02:27.511 LIB libspdk_bdev_lvol.a 00:02:27.511 SYMLINK libspdk_bdev_delay.so 00:02:27.511 SYMLINK libspdk_bdev_malloc.so 00:02:27.511 SYMLINK libspdk_bdev_zone_block.so 00:02:27.511 SYMLINK libspdk_bdev_iscsi.so 00:02:27.511 SO libspdk_bdev_lvol.so.5.0 00:02:27.511 SO libspdk_bdev_virtio.so.5.0 00:02:27.772 SYMLINK libspdk_bdev_lvol.so 00:02:27.772 SYMLINK libspdk_bdev_virtio.so 00:02:27.772 LIB libspdk_bdev_raid.a 00:02:28.032 SO libspdk_bdev_raid.so.5.0 00:02:28.032 SYMLINK libspdk_bdev_raid.so 00:02:28.972 LIB libspdk_bdev_nvme.a 00:02:28.972 SO libspdk_bdev_nvme.so.6.0 00:02:28.972 SYMLINK libspdk_bdev_nvme.so 00:02:29.543 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.543 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.543 CC module/event/subsystems/sock/sock.o 00:02:29.543 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:29.543 CC module/event/subsystems/vmd/vmd.o 00:02:29.543 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.543 CC module/event/subsystems/scheduler/scheduler.o 
00:02:29.543 LIB libspdk_event_vhost_blk.a 00:02:29.543 SO libspdk_event_vhost_blk.so.2.0 00:02:29.543 LIB libspdk_event_sock.a 00:02:29.543 LIB libspdk_event_vmd.a 00:02:29.543 LIB libspdk_event_scheduler.a 00:02:29.543 LIB libspdk_event_iobuf.a 00:02:29.543 SO libspdk_event_sock.so.4.0 00:02:29.804 SO libspdk_event_scheduler.so.3.0 00:02:29.804 SO libspdk_event_vmd.so.5.0 00:02:29.804 SYMLINK libspdk_event_vhost_blk.so 00:02:29.804 SO libspdk_event_iobuf.so.2.0 00:02:29.804 SYMLINK libspdk_event_sock.so 00:02:29.804 SYMLINK libspdk_event_vmd.so 00:02:29.804 SYMLINK libspdk_event_scheduler.so 00:02:29.804 SYMLINK libspdk_event_iobuf.so 00:02:30.064 CC module/event/subsystems/accel/accel.o 00:02:30.064 LIB libspdk_event_accel.a 00:02:30.064 SO libspdk_event_accel.so.5.0 00:02:30.324 SYMLINK libspdk_event_accel.so 00:02:30.584 CC module/event/subsystems/bdev/bdev.o 00:02:30.584 LIB libspdk_event_bdev.a 00:02:30.584 SO libspdk_event_bdev.so.5.0 00:02:30.846 SYMLINK libspdk_event_bdev.so 00:02:30.846 CC module/event/subsystems/scsi/scsi.o 00:02:30.846 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:30.846 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:30.846 CC module/event/subsystems/nbd/nbd.o 00:02:30.846 CC module/event/subsystems/ublk/ublk.o 00:02:31.108 LIB libspdk_event_scsi.a 00:02:31.108 LIB libspdk_event_nbd.a 00:02:31.108 LIB libspdk_event_ublk.a 00:02:31.108 SO libspdk_event_scsi.so.5.0 00:02:31.108 SO libspdk_event_nbd.so.5.0 00:02:31.108 SO libspdk_event_ublk.so.2.0 00:02:31.108 LIB libspdk_event_nvmf.a 00:02:31.108 SYMLINK libspdk_event_scsi.so 00:02:31.108 SYMLINK libspdk_event_nbd.so 00:02:31.108 SO libspdk_event_nvmf.so.5.0 00:02:31.368 SYMLINK libspdk_event_ublk.so 00:02:31.368 SYMLINK libspdk_event_nvmf.so 00:02:31.368 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:31.369 CC module/event/subsystems/iscsi/iscsi.o 00:02:31.630 LIB libspdk_event_vhost_scsi.a 00:02:31.630 LIB libspdk_event_iscsi.a 00:02:31.630 SO libspdk_event_vhost_scsi.so.2.0 00:02:31.630 SO libspdk_event_iscsi.so.5.0 00:02:31.630 SYMLINK libspdk_event_vhost_scsi.so 00:02:31.630 SYMLINK libspdk_event_iscsi.so 00:02:31.891 SO libspdk.so.5.0 00:02:31.891 SYMLINK libspdk.so 00:02:32.151 CXX app/trace/trace.o 00:02:32.151 CC app/spdk_top/spdk_top.o 00:02:32.151 TEST_HEADER include/spdk/assert.h 00:02:32.151 TEST_HEADER include/spdk/accel_module.h 00:02:32.151 TEST_HEADER include/spdk/accel.h 00:02:32.151 TEST_HEADER include/spdk/barrier.h 00:02:32.151 TEST_HEADER include/spdk/bdev.h 00:02:32.151 TEST_HEADER include/spdk/base64.h 00:02:32.151 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.151 TEST_HEADER include/spdk/bdev_module.h 00:02:32.151 TEST_HEADER include/spdk/bit_array.h 00:02:32.151 CC app/spdk_lspci/spdk_lspci.o 00:02:32.151 TEST_HEADER include/spdk/bit_pool.h 00:02:32.151 CC app/spdk_nvme_perf/perf.o 00:02:32.151 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.151 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.151 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.151 TEST_HEADER include/spdk/blobfs.h 00:02:32.151 TEST_HEADER include/spdk/config.h 00:02:32.151 TEST_HEADER include/spdk/blob.h 00:02:32.151 TEST_HEADER include/spdk/conf.h 00:02:32.151 TEST_HEADER include/spdk/cpuset.h 00:02:32.151 CC app/trace_record/trace_record.o 00:02:32.151 TEST_HEADER include/spdk/crc32.h 00:02:32.151 TEST_HEADER include/spdk/crc16.h 00:02:32.151 TEST_HEADER include/spdk/crc64.h 00:02:32.151 TEST_HEADER include/spdk/dif.h 00:02:32.151 CC test/rpc_client/rpc_client_test.o 00:02:32.151 TEST_HEADER 
include/spdk/dma.h 00:02:32.151 CC app/spdk_nvme_identify/identify.o 00:02:32.151 TEST_HEADER include/spdk/endian.h 00:02:32.151 TEST_HEADER include/spdk/env_dpdk.h 00:02:32.151 TEST_HEADER include/spdk/env.h 00:02:32.151 TEST_HEADER include/spdk/fd_group.h 00:02:32.152 TEST_HEADER include/spdk/event.h 00:02:32.152 TEST_HEADER include/spdk/fd.h 00:02:32.152 TEST_HEADER include/spdk/ftl.h 00:02:32.152 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:32.152 TEST_HEADER include/spdk/file.h 00:02:32.152 TEST_HEADER include/spdk/hexlify.h 00:02:32.152 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.152 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.152 TEST_HEADER include/spdk/idxd.h 00:02:32.152 TEST_HEADER include/spdk/histogram_data.h 00:02:32.152 TEST_HEADER include/spdk/idxd_spec.h 00:02:32.152 TEST_HEADER include/spdk/init.h 00:02:32.152 TEST_HEADER include/spdk/ioat.h 00:02:32.152 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.152 TEST_HEADER include/spdk/iscsi_spec.h 00:02:32.152 TEST_HEADER include/spdk/json.h 00:02:32.152 TEST_HEADER include/spdk/likely.h 00:02:32.152 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.152 TEST_HEADER include/spdk/log.h 00:02:32.152 TEST_HEADER include/spdk/lvol.h 00:02:32.152 CC app/spdk_tgt/spdk_tgt.o 00:02:32.152 TEST_HEADER include/spdk/mmio.h 00:02:32.152 TEST_HEADER include/spdk/nbd.h 00:02:32.152 CC app/vhost/vhost.o 00:02:32.152 CC app/spdk_dd/spdk_dd.o 00:02:32.152 TEST_HEADER include/spdk/memory.h 00:02:32.152 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.152 TEST_HEADER include/spdk/nvme.h 00:02:32.152 TEST_HEADER include/spdk/notify.h 00:02:32.152 CC app/nvmf_tgt/nvmf_main.o 00:02:32.152 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.152 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:32.152 TEST_HEADER include/spdk/nvme_spec.h 00:02:32.152 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.152 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.152 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.152 TEST_HEADER include/spdk/nvmf.h 00:02:32.152 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.152 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.152 TEST_HEADER include/spdk/opal.h 00:02:32.152 TEST_HEADER include/spdk/opal_spec.h 00:02:32.152 TEST_HEADER include/spdk/pci_ids.h 00:02:32.152 TEST_HEADER include/spdk/pipe.h 00:02:32.152 TEST_HEADER include/spdk/queue.h 00:02:32.152 TEST_HEADER include/spdk/reduce.h 00:02:32.152 TEST_HEADER include/spdk/rpc.h 00:02:32.152 TEST_HEADER include/spdk/scheduler.h 00:02:32.152 TEST_HEADER include/spdk/scsi_spec.h 00:02:32.152 TEST_HEADER include/spdk/sock.h 00:02:32.152 TEST_HEADER include/spdk/scsi.h 00:02:32.152 TEST_HEADER include/spdk/stdinc.h 00:02:32.152 TEST_HEADER include/spdk/thread.h 00:02:32.152 TEST_HEADER include/spdk/string.h 00:02:32.152 TEST_HEADER include/spdk/trace.h 00:02:32.152 TEST_HEADER include/spdk/trace_parser.h 00:02:32.152 TEST_HEADER include/spdk/tree.h 00:02:32.152 TEST_HEADER include/spdk/ublk.h 00:02:32.152 TEST_HEADER include/spdk/uuid.h 00:02:32.152 TEST_HEADER include/spdk/util.h 00:02:32.152 TEST_HEADER include/spdk/version.h 00:02:32.152 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:32.152 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:32.152 TEST_HEADER include/spdk/vhost.h 00:02:32.152 TEST_HEADER include/spdk/vmd.h 00:02:32.152 TEST_HEADER include/spdk/xor.h 00:02:32.152 CXX test/cpp_headers/accel.o 00:02:32.152 TEST_HEADER include/spdk/zipf.h 00:02:32.152 CXX test/cpp_headers/accel_module.o 00:02:32.152 CXX test/cpp_headers/assert.o 00:02:32.152 CXX test/cpp_headers/barrier.o 00:02:32.152 
CXX test/cpp_headers/bdev_module.o 00:02:32.152 CXX test/cpp_headers/base64.o 00:02:32.152 CXX test/cpp_headers/bdev.o 00:02:32.152 CXX test/cpp_headers/bdev_zone.o 00:02:32.152 CXX test/cpp_headers/blob_bdev.o 00:02:32.152 CXX test/cpp_headers/bit_array.o 00:02:32.152 CXX test/cpp_headers/bit_pool.o 00:02:32.152 CXX test/cpp_headers/blobfs_bdev.o 00:02:32.152 CXX test/cpp_headers/blobfs.o 00:02:32.152 CXX test/cpp_headers/blob.o 00:02:32.152 CXX test/cpp_headers/conf.o 00:02:32.152 CXX test/cpp_headers/config.o 00:02:32.152 CXX test/cpp_headers/cpuset.o 00:02:32.152 CXX test/cpp_headers/crc32.o 00:02:32.152 CXX test/cpp_headers/crc16.o 00:02:32.152 CXX test/cpp_headers/dif.o 00:02:32.152 CXX test/cpp_headers/crc64.o 00:02:32.152 CXX test/cpp_headers/endian.o 00:02:32.152 CXX test/cpp_headers/dma.o 00:02:32.152 CXX test/cpp_headers/env_dpdk.o 00:02:32.152 CXX test/cpp_headers/event.o 00:02:32.152 CXX test/cpp_headers/fd.o 00:02:32.152 CXX test/cpp_headers/env.o 00:02:32.152 CXX test/cpp_headers/fd_group.o 00:02:32.152 CXX test/cpp_headers/gpt_spec.o 00:02:32.152 CXX test/cpp_headers/file.o 00:02:32.152 CXX test/cpp_headers/hexlify.o 00:02:32.152 CXX test/cpp_headers/histogram_data.o 00:02:32.418 CXX test/cpp_headers/ftl.o 00:02:32.418 CC examples/util/zipf/zipf.o 00:02:32.418 CXX test/cpp_headers/idxd.o 00:02:32.418 CXX test/cpp_headers/idxd_spec.o 00:02:32.418 CXX test/cpp_headers/init.o 00:02:32.418 CC test/nvme/reset/reset.o 00:02:32.418 CXX test/cpp_headers/ioat.o 00:02:32.418 CC examples/vmd/lsvmd/lsvmd.o 00:02:32.418 CXX test/cpp_headers/ioat_spec.o 00:02:32.418 CXX test/cpp_headers/jsonrpc.o 00:02:32.418 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:32.418 CXX test/cpp_headers/iscsi_spec.o 00:02:32.418 CXX test/cpp_headers/json.o 00:02:32.418 CXX test/cpp_headers/likely.o 00:02:32.418 CXX test/cpp_headers/log.o 00:02:32.418 CC test/thread/poller_perf/poller_perf.o 00:02:32.418 CC examples/ioat/verify/verify.o 00:02:32.418 CXX test/cpp_headers/memory.o 00:02:32.418 CXX test/cpp_headers/mmio.o 00:02:32.418 CXX test/cpp_headers/lvol.o 00:02:32.418 CC test/env/vtophys/vtophys.o 00:02:32.418 CC test/env/memory/memory_ut.o 00:02:32.418 CXX test/cpp_headers/notify.o 00:02:32.418 CXX test/cpp_headers/nbd.o 00:02:32.418 CXX test/cpp_headers/nvme.o 00:02:32.418 CC test/nvme/aer/aer.o 00:02:32.418 CC examples/sock/hello_world/hello_sock.o 00:02:32.418 CC test/app/stub/stub.o 00:02:32.418 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:32.418 CC examples/nvme/hello_world/hello_world.o 00:02:32.418 CXX test/cpp_headers/nvme_intel.o 00:02:32.418 CXX test/cpp_headers/nvme_ocssd.o 00:02:32.418 CC app/fio/nvme/fio_plugin.o 00:02:32.418 CC test/app/histogram_perf/histogram_perf.o 00:02:32.418 CXX test/cpp_headers/nvme_spec.o 00:02:32.418 CC examples/vmd/led/led.o 00:02:32.418 CC test/app/jsoncat/jsoncat.o 00:02:32.418 CXX test/cpp_headers/nvme_zns.o 00:02:32.418 CXX test/cpp_headers/nvmf_cmd.o 00:02:32.418 CC examples/accel/perf/accel_perf.o 00:02:32.418 CC test/env/pci/pci_ut.o 00:02:32.418 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:32.418 CXX test/cpp_headers/nvmf.o 00:02:32.418 CC test/nvme/connect_stress/connect_stress.o 00:02:32.418 CC test/nvme/err_injection/err_injection.o 00:02:32.418 CXX test/cpp_headers/nvmf_transport.o 00:02:32.418 CXX test/cpp_headers/nvmf_spec.o 00:02:32.418 CC examples/ioat/perf/perf.o 00:02:32.418 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.418 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:32.418 CC test/nvme/overhead/overhead.o 00:02:32.418 
CXX test/cpp_headers/opal.o 00:02:32.418 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.418 CC test/nvme/cuse/cuse.o 00:02:32.418 CC examples/idxd/perf/perf.o 00:02:32.418 CC test/nvme/compliance/nvme_compliance.o 00:02:32.418 CC test/nvme/fused_ordering/fused_ordering.o 00:02:32.418 CC test/nvme/e2edp/nvme_dp.o 00:02:32.418 CC test/event/reactor/reactor.o 00:02:32.418 CC test/nvme/sgl/sgl.o 00:02:32.418 CC examples/nvme/arbitration/arbitration.o 00:02:32.418 CC test/nvme/fdp/fdp.o 00:02:32.418 CXX test/cpp_headers/queue.o 00:02:32.418 CC test/event/event_perf/event_perf.o 00:02:32.418 CXX test/cpp_headers/opal_spec.o 00:02:32.418 CXX test/cpp_headers/pipe.o 00:02:32.418 CXX test/cpp_headers/reduce.o 00:02:32.418 CC examples/nvme/reconnect/reconnect.o 00:02:32.418 CC test/nvme/startup/startup.o 00:02:32.418 CC test/nvme/reserve/reserve.o 00:02:32.418 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:32.418 CXX test/cpp_headers/pci_ids.o 00:02:32.418 CC examples/nvme/abort/abort.o 00:02:32.418 CXX test/cpp_headers/rpc.o 00:02:32.418 CC examples/nvme/hotplug/hotplug.o 00:02:32.418 CC examples/blob/hello_world/hello_blob.o 00:02:32.418 CXX test/cpp_headers/scheduler.o 00:02:32.418 CC test/nvme/simple_copy/simple_copy.o 00:02:32.418 CC test/event/app_repeat/app_repeat.o 00:02:32.418 CXX test/cpp_headers/scsi.o 00:02:32.418 CC examples/blob/cli/blobcli.o 00:02:32.418 CC examples/thread/thread/thread_ex.o 00:02:32.418 CC test/nvme/boot_partition/boot_partition.o 00:02:32.418 CC test/event/reactor_perf/reactor_perf.o 00:02:32.418 CC test/accel/dif/dif.o 00:02:32.418 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.418 CC test/bdev/bdevio/bdevio.o 00:02:32.418 CC examples/bdev/hello_world/hello_bdev.o 00:02:32.418 CC app/fio/bdev/fio_plugin.o 00:02:32.418 CC test/blobfs/mkfs/mkfs.o 00:02:32.418 CC examples/nvmf/nvmf/nvmf.o 00:02:32.418 CXX test/cpp_headers/scsi_spec.o 00:02:32.418 CC test/dma/test_dma/test_dma.o 00:02:32.418 CC test/app/bdev_svc/bdev_svc.o 00:02:32.418 CC test/event/scheduler/scheduler.o 00:02:32.418 CXX test/cpp_headers/sock.o 00:02:32.418 LINK spdk_lspci 00:02:32.682 CC test/lvol/esnap/esnap.o 00:02:32.682 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:32.682 CC test/env/mem_callbacks/mem_callbacks.o 00:02:32.682 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:32.682 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:32.682 LINK spdk_nvme_discover 00:02:32.682 LINK interrupt_tgt 00:02:32.682 LINK nvmf_tgt 00:02:32.682 LINK vhost 00:02:32.682 LINK lsvmd 00:02:32.682 LINK led 00:02:32.682 LINK spdk_trace_record 00:02:32.943 LINK env_dpdk_post_init 00:02:32.943 LINK rpc_client_test 00:02:32.943 LINK zipf 00:02:32.943 LINK spdk_tgt 00:02:32.943 LINK iscsi_tgt 00:02:32.943 LINK poller_perf 00:02:32.943 LINK jsoncat 00:02:32.943 LINK histogram_perf 00:02:32.943 LINK reactor 00:02:32.943 LINK startup 00:02:32.943 LINK event_perf 00:02:32.943 LINK reactor_perf 00:02:32.943 LINK vtophys 00:02:32.943 LINK stub 00:02:32.943 LINK boot_partition 00:02:32.943 LINK fused_ordering 00:02:32.943 LINK err_injection 00:02:32.943 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:32.943 LINK connect_stress 00:02:32.943 LINK app_repeat 00:02:32.943 LINK pmr_persistence 00:02:32.943 LINK cmb_copy 00:02:32.943 LINK reset 00:02:32.943 LINK reserve 00:02:32.943 LINK hello_blob 00:02:32.943 CXX test/cpp_headers/stdinc.o 00:02:32.943 LINK hello_world 00:02:32.943 CXX test/cpp_headers/string.o 00:02:32.943 LINK verify 00:02:32.943 LINK mkfs 00:02:32.943 LINK doorbell_aers 00:02:32.943 CXX test/cpp_headers/thread.o 
00:02:32.943 CXX test/cpp_headers/trace.o 00:02:32.943 CXX test/cpp_headers/trace_parser.o 00:02:32.943 CXX test/cpp_headers/tree.o 00:02:32.943 CXX test/cpp_headers/ublk.o 00:02:32.943 CXX test/cpp_headers/util.o 00:02:32.943 CXX test/cpp_headers/uuid.o 00:02:32.943 LINK aer 00:02:32.943 CXX test/cpp_headers/version.o 00:02:32.943 CXX test/cpp_headers/vfio_user_pci.o 00:02:32.943 LINK thread 00:02:32.943 CXX test/cpp_headers/vhost.o 00:02:32.943 CXX test/cpp_headers/vfio_user_spec.o 00:02:32.943 CXX test/cpp_headers/vmd.o 00:02:32.943 CXX test/cpp_headers/xor.o 00:02:32.943 CXX test/cpp_headers/zipf.o 00:02:32.943 LINK ioat_perf 00:02:32.943 LINK simple_copy 00:02:33.204 LINK bdev_svc 00:02:33.204 LINK scheduler 00:02:33.204 LINK hello_sock 00:02:33.204 LINK spdk_dd 00:02:33.204 LINK nvme_dp 00:02:33.204 LINK overhead 00:02:33.204 LINK hotplug 00:02:33.204 LINK fdp 00:02:33.204 LINK hello_bdev 00:02:33.204 LINK sgl 00:02:33.204 LINK reconnect 00:02:33.204 LINK spdk_trace 00:02:33.204 LINK nvme_compliance 00:02:33.204 LINK dif 00:02:33.204 LINK idxd_perf 00:02:33.204 LINK arbitration 00:02:33.204 LINK bdevio 00:02:33.204 LINK abort 00:02:33.204 LINK nvmf 00:02:33.204 LINK test_dma 00:02:33.204 LINK pci_ut 00:02:33.204 LINK nvme_fuzz 00:02:33.204 LINK nvme_manage 00:02:33.204 LINK accel_perf 00:02:33.465 LINK blobcli 00:02:33.465 LINK spdk_bdev 00:02:33.465 LINK spdk_nvme 00:02:33.465 LINK vhost_fuzz 00:02:33.465 LINK spdk_nvme_identify 00:02:33.465 LINK spdk_nvme_perf 00:02:33.465 LINK memory_ut 00:02:33.465 LINK mem_callbacks 00:02:33.727 LINK bdevperf 00:02:33.727 LINK spdk_top 00:02:33.727 LINK cuse 00:02:34.299 LINK iscsi_fuzz 00:02:36.846 LINK esnap 00:02:37.108 00:02:37.108 real 0m45.850s 00:02:37.108 user 6m12.783s 00:02:37.108 sys 3m56.721s 00:02:37.108 22:29:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:37.108 22:29:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.108 ************************************ 00:02:37.108 END TEST make 00:02:37.108 ************************************ 00:02:37.108 22:29:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:37.108 22:29:21 -- nvmf/common.sh@7 -- # uname -s 00:02:37.108 22:29:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:37.108 22:29:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:37.108 22:29:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:37.108 22:29:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:37.108 22:29:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:37.108 22:29:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:37.108 22:29:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:37.108 22:29:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:37.108 22:29:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:37.108 22:29:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:37.108 22:29:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:37.108 22:29:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:37.108 22:29:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:37.108 22:29:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:37.108 22:29:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:37.108 22:29:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:37.108 
22:29:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:37.108 22:29:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.108 22:29:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.108 22:29:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.108 22:29:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.108 22:29:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.108 22:29:21 -- paths/export.sh@5 -- # export PATH 00:02:37.108 22:29:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.108 22:29:21 -- nvmf/common.sh@46 -- # : 0 00:02:37.108 22:29:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:37.108 22:29:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:37.108 22:29:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:37.108 22:29:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:37.108 22:29:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:37.108 22:29:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:37.108 22:29:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:37.108 22:29:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:37.370 22:29:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:37.370 22:29:21 -- spdk/autotest.sh@32 -- # uname -s 00:02:37.370 22:29:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:37.370 22:29:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:37.370 22:29:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:37.370 22:29:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:37.370 22:29:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:37.370 22:29:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:37.370 22:29:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:37.370 22:29:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:37.370 22:29:21 -- spdk/autotest.sh@48 -- # udevadm_pid=826872 00:02:37.370 22:29:21 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:37.370 22:29:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:37.370 22:29:21 -- spdk/autotest.sh@54 -- # echo 826874 00:02:37.370 22:29:21 -- spdk/autotest.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:37.370 22:29:21 -- spdk/autotest.sh@56 -- # echo 826875 00:02:37.370 22:29:21 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:37.370 22:29:21 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:37.370 22:29:21 -- spdk/autotest.sh@60 -- # echo 826876 00:02:37.370 22:29:21 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:37.370 22:29:21 -- spdk/autotest.sh@62 -- # echo 826877 00:02:37.370 22:29:21 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:37.370 22:29:21 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:37.370 22:29:21 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:37.370 22:29:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:37.370 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:02:37.370 22:29:21 -- spdk/autotest.sh@70 -- # create_test_list 00:02:37.370 22:29:21 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:37.370 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:02:37.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:37.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:37.371 22:29:21 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:37.371 22:29:22 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.371 22:29:22 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.371 22:29:22 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:37.371 22:29:22 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.371 22:29:22 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:37.371 22:29:22 -- common/autotest_common.sh@1440 -- # uname 00:02:37.371 22:29:22 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:37.371 22:29:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:37.371 22:29:22 -- common/autotest_common.sh@1460 -- # uname 00:02:37.371 22:29:22 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:37.371 22:29:22 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:37.371 22:29:22 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:37.371 22:29:22 -- spdk/autotest.sh@83 -- # hash lcov 00:02:37.371 22:29:22 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:37.371 22:29:22 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:37.371 --rc lcov_branch_coverage=1 00:02:37.371 --rc lcov_function_coverage=1 00:02:37.371 --rc genhtml_branch_coverage=1 00:02:37.371 --rc genhtml_function_coverage=1 00:02:37.371 --rc genhtml_legend=1 00:02:37.371 --rc geninfo_all_blocks=1 00:02:37.371 ' 00:02:37.371 22:29:22 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:37.371 --rc lcov_branch_coverage=1 00:02:37.371 --rc 
lcov_function_coverage=1 00:02:37.371 --rc genhtml_branch_coverage=1 00:02:37.371 --rc genhtml_function_coverage=1 00:02:37.371 --rc genhtml_legend=1 00:02:37.371 --rc geninfo_all_blocks=1 00:02:37.371 ' 00:02:37.371 22:29:22 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:37.371 --rc lcov_branch_coverage=1 00:02:37.371 --rc lcov_function_coverage=1 00:02:37.371 --rc genhtml_branch_coverage=1 00:02:37.371 --rc genhtml_function_coverage=1 00:02:37.371 --rc genhtml_legend=1 00:02:37.371 --rc geninfo_all_blocks=1 00:02:37.371 --no-external' 00:02:37.371 22:29:22 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:37.371 --rc lcov_branch_coverage=1 00:02:37.371 --rc lcov_function_coverage=1 00:02:37.371 --rc genhtml_branch_coverage=1 00:02:37.371 --rc genhtml_function_coverage=1 00:02:37.371 --rc genhtml_legend=1 00:02:37.371 --rc geninfo_all_blocks=1 00:02:37.371 --no-external' 00:02:37.371 22:29:22 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:37.371 lcov: LCOV version 1.14 00:02:37.371 22:29:22 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:49.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:49.618 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:49.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:49.618 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:49.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:49.618 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:04.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 
00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no 
functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:04.568 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:04.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:04.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:04.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:04.569 22:29:49 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:04.569 22:29:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:04.569 22:29:49 -- common/autotest_common.sh@10 -- # set +x 00:03:04.569 22:29:49 -- spdk/autotest.sh@102 -- # rm -f 00:03:04.569 22:29:49 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
reset 00:03:08.786 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:08.787 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.787 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.787 22:29:53 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:08.787 22:29:53 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:08.787 22:29:53 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:08.787 22:29:53 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:08.787 22:29:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:08.787 22:29:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:08.787 22:29:53 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:08.787 22:29:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.787 22:29:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:08.787 22:29:53 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:08.787 22:29:53 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:08.787 22:29:53 -- spdk/autotest.sh@121 -- # grep -v p 00:03:08.787 22:29:53 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:08.787 22:29:53 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:08.787 22:29:53 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:08.787 22:29:53 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:08.787 22:29:53 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:09.064 No valid GPT data, bailing 00:03:09.064 22:29:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:09.064 22:29:53 -- scripts/common.sh@393 -- # pt= 00:03:09.064 22:29:53 -- scripts/common.sh@394 -- # return 1 00:03:09.064 22:29:53 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:09.064 1+0 records in 00:03:09.064 1+0 records out 00:03:09.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463555 s, 226 MB/s 00:03:09.064 22:29:53 -- spdk/autotest.sh@129 -- # sync 00:03:09.064 22:29:53 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:09.064 22:29:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:09.064 22:29:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:17.207 22:30:01 -- spdk/autotest.sh@135 -- # uname -s 00:03:17.207 22:30:01 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 
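The pre-cleanup pass traced above skips any NVMe namespace whose queue/zoned attribute is not "none", and only zeroes the first MiB of a disk once spdk-gpt.py and blkid both report no partition table on it. A minimal stand-alone sketch of that flow, assuming a hypothetical /dev/nvme0n1 and using only stock tools rather than the autotest helpers themselves:

    #!/usr/bin/env bash
    # Illustrative sketch of the zoned/in-use checks seen in the trace above;
    # not the SPDK autotest implementation. The device name is hypothetical.
    dev=nvme0n1
    zoned=none
    [[ -e /sys/block/$dev/queue/zoned ]] && zoned=$(cat /sys/block/$dev/queue/zoned)
    if [[ $zoned != none ]]; then
        echo "skipping $dev: zoned namespace ($zoned)"
        exit 0
    fi
    # Wipe only when blkid reports no partition-table type, mirroring the
    # "No valid GPT data, bailing" + "blkid -s PTTYPE" fallback in the log.
    if [[ -z $(blkid -s PTTYPE -o value /dev/$dev) ]]; then
        dd if=/dev/zero of=/dev/$dev bs=1M count=1
    fi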
00:03:17.207 22:30:01 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:17.207 22:30:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:17.207 22:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:17.207 22:30:01 -- common/autotest_common.sh@10 -- # set +x 00:03:17.207 ************************************ 00:03:17.207 START TEST setup.sh 00:03:17.207 ************************************ 00:03:17.207 22:30:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:17.207 * Looking for test storage... 00:03:17.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.207 22:30:01 -- setup/test-setup.sh@10 -- # uname -s 00:03:17.207 22:30:01 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:17.207 22:30:01 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:17.207 22:30:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:17.207 22:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:17.207 22:30:01 -- common/autotest_common.sh@10 -- # set +x 00:03:17.207 ************************************ 00:03:17.207 START TEST acl 00:03:17.207 ************************************ 00:03:17.207 22:30:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:17.207 * Looking for test storage... 00:03:17.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.207 22:30:01 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:17.207 22:30:01 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:17.207 22:30:01 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:17.207 22:30:01 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:17.207 22:30:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:17.207 22:30:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:17.207 22:30:01 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:17.207 22:30:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.207 22:30:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:17.207 22:30:01 -- setup/acl.sh@12 -- # devs=() 00:03:17.207 22:30:01 -- setup/acl.sh@12 -- # declare -a devs 00:03:17.207 22:30:01 -- setup/acl.sh@13 -- # drivers=() 00:03:17.207 22:30:01 -- setup/acl.sh@13 -- # declare -A drivers 00:03:17.207 22:30:01 -- setup/acl.sh@51 -- # setup reset 00:03:17.207 22:30:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.207 22:30:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.417 22:30:05 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:21.417 22:30:05 -- setup/acl.sh@16 -- # local dev driver 00:03:21.417 22:30:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.417 22:30:05 -- setup/acl.sh@15 -- # setup output status 00:03:21.417 22:30:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.417 22:30:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:24.723 Hugepages 00:03:24.723 node hugesize free / total 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # continue 00:03:24.723 22:30:09 
-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # continue 00:03:24.723 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # continue 00:03:24.723 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.723 00:03:24.723 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # continue 00:03:24.723 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.723 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.723 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.723 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.723 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.723 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.723 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.723 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.724 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:24.724 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.724 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.724 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.724 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:24.724 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.724 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.724 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.724 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:24.724 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.724 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.724 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:24.985 22:30:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # 
continue 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.985 22:30:09 -- setup/acl.sh@20 -- # continue 00:03:24.985 22:30:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.985 22:30:09 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:24.985 22:30:09 -- setup/acl.sh@54 -- # run_test denied denied 00:03:24.985 22:30:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:24.985 22:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:24.985 22:30:09 -- common/autotest_common.sh@10 -- # set +x 00:03:24.985 ************************************ 00:03:24.985 START TEST denied 00:03:24.985 ************************************ 00:03:24.985 22:30:09 -- common/autotest_common.sh@1104 -- # denied 00:03:24.985 22:30:09 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:24.985 22:30:09 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:24.985 22:30:09 -- setup/acl.sh@38 -- # setup output config 00:03:24.985 22:30:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.985 22:30:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.193 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:29.193 22:30:13 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:29.193 22:30:13 -- setup/acl.sh@28 -- # local dev driver 00:03:29.193 22:30:13 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:29.193 22:30:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:29.193 22:30:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:29.193 22:30:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:29.193 22:30:13 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:29.193 22:30:13 -- setup/acl.sh@41 -- # setup reset 00:03:29.193 22:30:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.193 22:30:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.507 00:03:34.507 real 0m9.481s 00:03:34.507 user 0m3.178s 00:03:34.507 sys 0m5.589s 00:03:34.507 22:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.507 22:30:19 -- common/autotest_common.sh@10 -- # set +x 00:03:34.507 ************************************ 00:03:34.507 END TEST denied 00:03:34.507 ************************************ 00:03:34.507 22:30:19 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:34.507 22:30:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:34.507 22:30:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:34.507 22:30:19 -- common/autotest_common.sh@10 -- # set +x 00:03:34.507 ************************************ 00:03:34.507 START TEST allowed 00:03:34.507 ************************************ 00:03:34.507 22:30:19 -- common/autotest_common.sh@1104 -- # allowed 00:03:34.507 22:30:19 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:34.507 22:30:19 -- setup/acl.sh@45 -- # setup output config 00:03:34.507 22:30:19 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:34.507 22:30:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.507 22:30:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.096 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:41.096 22:30:25 -- setup/acl.sh@47 -- # verify 00:03:41.096 22:30:25 -- setup/acl.sh@28 -- # local dev driver 00:03:41.096 22:30:25 -- setup/acl.sh@48 -- # setup reset 00:03:41.096 22:30:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.096 22:30:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.306 00:03:45.306 real 0m10.191s 00:03:45.306 user 0m3.077s 00:03:45.306 sys 0m5.376s 00:03:45.306 22:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.306 22:30:29 -- common/autotest_common.sh@10 -- # set +x 00:03:45.306 ************************************ 00:03:45.306 END TEST allowed 00:03:45.306 ************************************ 00:03:45.306 00:03:45.306 real 0m27.845s 00:03:45.306 user 0m9.195s 00:03:45.306 sys 0m16.361s 00:03:45.306 22:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.306 22:30:29 -- common/autotest_common.sh@10 -- # set +x 00:03:45.306 ************************************ 00:03:45.306 END TEST acl 00:03:45.306 ************************************ 00:03:45.306 22:30:29 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.306 22:30:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.306 22:30:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.306 22:30:29 -- common/autotest_common.sh@10 -- # set +x 00:03:45.306 ************************************ 00:03:45.306 START TEST hugepages 00:03:45.306 ************************************ 00:03:45.306 22:30:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.306 * Looking for test storage... 
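The acl suite summarized above drives scripts/setup.sh through its PCI allow/deny lists: the denied test exports PCI_BLOCKED for the NVMe controller and greps for the "Skipping denied controller" message, while the allowed test exports PCI_ALLOWED and expects the controller to be rebound to vfio-pci. A hedged stand-alone repro of those two checks, with the BDF taken from this run and paths shortened to a source-tree-relative form:

    # Illustrative wrapper around the checks test/setup/acl.sh performs above;
    # run as root from an SPDK checkout. BDF 0000:65:00.0 comes from this log.
    bdf=0000:65:00.0
    PCI_BLOCKED=" $bdf" ./scripts/setup.sh config \
        | grep "Skipping denied controller at $bdf"     # denied case
    ./scripts/setup.sh reset
    PCI_ALLOWED="$bdf" ./scripts/setup.sh config \
        | grep -E "$bdf .*: nvme -> .*"                 # allowed case
    ./scripts/setup.sh reset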
00:03:45.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.306 22:30:29 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.306 22:30:29 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.306 22:30:29 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.306 22:30:29 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.306 22:30:29 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.306 22:30:29 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.306 22:30:29 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.306 22:30:29 -- setup/common.sh@18 -- # local node= 00:03:45.306 22:30:29 -- setup/common.sh@19 -- # local var val 00:03:45.306 22:30:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.306 22:30:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.306 22:30:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.306 22:30:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.306 22:30:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.306 22:30:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 107295492 kB' 'MemAvailable: 107679228 kB' 'Buffers: 2736 kB' 'Cached: 15401476 kB' 'SwapCached: 0 kB' 'Active: 15627624 kB' 'Inactive: 371152 kB' 'Active(anon): 14944316 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597972 kB' 'Mapped: 199696 kB' 'Shmem: 14349752 kB' 'KReclaimable: 337744 kB' 'Slab: 1202660 kB' 'SReclaimable: 337744 kB' 'SUnreclaim: 864916 kB' 'KernelStack: 27536 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 72040740 kB' 'Committed_AS: 16402024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237364 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.306 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.306 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # continue 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.307 22:30:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.307 22:30:29 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.307 22:30:29 -- setup/common.sh@33 -- # echo 2048 00:03:45.307 22:30:29 -- setup/common.sh@33 -- # return 0 00:03:45.307 22:30:29 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:45.307 22:30:29 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:45.307 22:30:29 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:45.307 22:30:29 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:45.307 22:30:29 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:45.307 22:30:29 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:45.307 22:30:29 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:45.307 22:30:29 -- setup/hugepages.sh@207 -- # get_nodes 00:03:45.307 22:30:29 -- setup/hugepages.sh@27 -- # local node 00:03:45.307 22:30:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.307 22:30:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:45.307 22:30:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.307 22:30:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.307 22:30:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.307 22:30:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.307 22:30:29 -- setup/hugepages.sh@208 -- # clear_hp 00:03:45.307 22:30:29 -- setup/hugepages.sh@37 -- # local node hp 00:03:45.307 22:30:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.307 22:30:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.307 22:30:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.307 22:30:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.307 22:30:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.307 22:30:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.307 22:30:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.307 22:30:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.307 22:30:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.307 22:30:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.307 22:30:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.307 22:30:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.307 22:30:29 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:45.308 22:30:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.308 22:30:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.308 22:30:29 -- common/autotest_common.sh@10 -- # set +x 00:03:45.308 ************************************ 00:03:45.308 START TEST default_setup 00:03:45.308 ************************************ 00:03:45.308 22:30:29 -- common/autotest_common.sh@1104 -- # default_setup 00:03:45.308 22:30:29 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:45.308 22:30:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.308 22:30:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.308 22:30:29 -- setup/hugepages.sh@51 -- # shift 00:03:45.308 22:30:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.308 22:30:29 -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.308 22:30:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.308 22:30:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.308 22:30:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.308 22:30:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.308 22:30:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.308 22:30:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.308 22:30:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.308 22:30:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.308 22:30:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.308 22:30:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
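default_setup, which starts next, asks get_test_nr_hugepages for a 2097152 kB pool on node 0; against the 2048 kB Hugepagesize read from /proc/meminfo above, that works out to the 1024 pages reported later in the run, and clear_hp first zeroes every per-node pool so the test starts clean. A small sketch of the same arithmetic and sysfs writes, offered as an illustration rather than the hugepages.sh helpers themselves (needs root):

    # Illustrative: derive the hugepage count the way the trace above does,
    # clear the per-node pools, then set the default 2 MiB pool size.
    size_kb=2097152
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this node
    nr=$(( size_kb / hp_kb ))                                  # -> 1024
    for f in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$f"                                          # clear_hp equivalent
    done
    echo "$nr" > /sys/kernel/mm/hugepages/hugepages-${hp_kb}kB/nr_hugepages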
00:03:45.308 22:30:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.308 22:30:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.308 22:30:29 -- setup/hugepages.sh@73 -- # return 0 00:03:45.308 22:30:29 -- setup/hugepages.sh@137 -- # setup output 00:03:45.308 22:30:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.308 22:30:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.609 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.609 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:49.185 22:30:33 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:49.185 22:30:33 -- setup/hugepages.sh@89 -- # local node 00:03:49.185 22:30:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.185 22:30:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.185 22:30:33 -- setup/hugepages.sh@92 -- # local surp 00:03:49.185 22:30:33 -- setup/hugepages.sh@93 -- # local resv 00:03:49.185 22:30:33 -- setup/hugepages.sh@94 -- # local anon 00:03:49.185 22:30:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.185 22:30:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.185 22:30:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.185 22:30:33 -- setup/common.sh@18 -- # local node= 00:03:49.185 22:30:33 -- setup/common.sh@19 -- # local var val 00:03:49.185 22:30:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.185 22:30:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.185 22:30:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.185 22:30:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.185 22:30:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.185 22:30:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.185 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.185 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.185 22:30:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109496604 kB' 'MemAvailable: 109879924 kB' 'Buffers: 2736 kB' 'Cached: 15401604 kB' 'SwapCached: 0 kB' 'Active: 15643972 kB' 'Inactive: 371152 kB' 'Active(anon): 14960664 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614040 kB' 'Mapped: 199908 kB' 'Shmem: 14349880 kB' 'KReclaimable: 336912 kB' 'Slab: 1199888 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862976 kB' 'KernelStack: 
27712 kB' 'PageTables: 9912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16421996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237540 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:49.185 22:30:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.185 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.186 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.186 22:30:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.186 22:30:33 -- setup/common.sh@33 -- # echo 0 00:03:49.186 22:30:33 -- setup/common.sh@33 -- # return 0 00:03:49.186 22:30:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.186 22:30:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.186 22:30:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.186 22:30:33 -- setup/common.sh@18 -- # local node= 00:03:49.186 22:30:33 -- setup/common.sh@19 -- # local var val 00:03:49.186 22:30:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.187 22:30:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.187 22:30:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.187 22:30:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.187 22:30:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.187 22:30:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109496720 kB' 'MemAvailable: 109880040 kB' 'Buffers: 2736 kB' 'Cached: 15401612 kB' 'SwapCached: 0 kB' 'Active: 15644188 kB' 'Inactive: 371152 kB' 'Active(anon): 14960880 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614780 kB' 'Mapped: 199960 kB' 'Shmem: 14349888 kB' 'KReclaimable: 336912 kB' 'Slab: 1199944 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 863032 kB' 'KernelStack: 27776 kB' 'PageTables: 9824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16423532 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237588 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 
-- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.187 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.187 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 
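The long runs of "[[ <field> == ... ]]" / "continue" entries above and below come from get_meminfo scanning every line of /proc/meminfo (or a per-node meminfo file) until it reaches the requested field and echoes its value. A minimal sketch of that helper, reconstructed only from the commands visible in this trace; the actual spdk setup/common.sh implementation may differ in details:

    shopt -s extglob                      # the "Node +([0-9])" pattern below needs extglob

    get_meminfo() {                       # usage: get_meminfo <Field> [<numa-node>]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem line
        # With a node number, read that node's meminfo instead of the global file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines start with a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"               # e.g. 0 for AnonHugePages, 1024 for HugePages_Total
                return 0
            fi
        done
        return 1
    }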
00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.188 22:30:33 -- setup/common.sh@33 -- # echo 0 00:03:49.188 22:30:33 -- setup/common.sh@33 -- # return 0 00:03:49.188 22:30:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:49.188 22:30:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.188 22:30:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.188 22:30:33 -- setup/common.sh@18 -- # local node= 00:03:49.188 22:30:33 -- setup/common.sh@19 -- # local var val 00:03:49.188 22:30:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.188 22:30:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.188 22:30:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.188 22:30:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.188 22:30:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.188 22:30:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109495288 kB' 'MemAvailable: 109878608 kB' 'Buffers: 2736 kB' 'Cached: 15401624 kB' 'SwapCached: 0 kB' 'Active: 15644296 kB' 'Inactive: 371152 kB' 'Active(anon): 14960988 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614368 kB' 'Mapped: 199880 kB' 'Shmem: 14349900 kB' 'KReclaimable: 336912 kB' 'Slab: 1199340 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862428 kB' 'KernelStack: 27792 kB' 'PageTables: 9716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16423548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237556 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.188 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.188 
22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.188 22:30:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # 
continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.189 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.189 22:30:33 -- setup/common.sh@33 -- # echo 0 00:03:49.189 22:30:33 -- setup/common.sh@33 -- # return 0 00:03:49.189 22:30:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:49.189 22:30:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.189 nr_hugepages=1024 00:03:49.189 22:30:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.189 resv_hugepages=0 00:03:49.189 22:30:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.189 surplus_hugepages=0 00:03:49.189 22:30:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.189 anon_hugepages=0 00:03:49.189 22:30:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.189 22:30:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.189 22:30:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.189 22:30:33 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:49.189 22:30:33 -- setup/common.sh@18 -- # local node= 00:03:49.189 22:30:33 -- setup/common.sh@19 -- # local var val 00:03:49.189 22:30:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.189 22:30:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.189 22:30:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.189 22:30:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.189 22:30:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.189 22:30:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.189 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109495056 kB' 'MemAvailable: 109878376 kB' 'Buffers: 2736 kB' 'Cached: 15401624 kB' 'SwapCached: 0 kB' 'Active: 15644464 kB' 'Inactive: 371152 kB' 'Active(anon): 14961156 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614512 kB' 'Mapped: 199880 kB' 'Shmem: 14349900 kB' 'KReclaimable: 336912 kB' 'Slab: 1199340 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862428 kB' 'KernelStack: 27760 kB' 'PageTables: 9740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16423564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237588 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.190 22:30:33 -- setup/common.sh@32 -- # 
continue 00:03:49.190 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 
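Once AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total have been read back (0, 0, 0 and 1024 in this run), verify_nr_hugepages only has to check that the numbers add up to the requested pool before repeating the same lookup per NUMA node. A short sketch of that accounting, assuming the get_meminfo helper sketched earlier; the values in the comments are the ones visible in this trace, and the failure handling shown is illustrative rather than the script's actual one:

    nr_hugepages=1024                          # requested pool size (nodes_test[...] set to 1024 above)
    anon=$(get_meminfo AnonHugePages)          # 0 -> no transparent hugepages to account for
    surp=$(get_meminfo HugePages_Surp)         # 0 -> no surplus pages beyond the pool
    resv=$(get_meminfo HugePages_Rsvd)         # 0 -> nothing reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)       # 1024
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count"
    # The same check is then repeated for every NUMA node (two on this machine),
    # reading /sys/devices/system/node/node<N>/meminfo instead, e.g. for node 0:
    get_meminfo HugePages_Surp 0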
00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.191 22:30:33 -- setup/common.sh@33 -- # echo 1024 00:03:49.191 22:30:33 -- setup/common.sh@33 -- # return 0 00:03:49.191 22:30:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.191 22:30:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.191 22:30:33 -- setup/hugepages.sh@27 -- # local node 00:03:49.191 22:30:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.191 22:30:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.191 22:30:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.191 22:30:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:49.191 22:30:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.191 22:30:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.191 22:30:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.191 22:30:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.191 22:30:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.191 22:30:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.191 22:30:33 -- setup/common.sh@18 -- # local node=0 00:03:49.191 22:30:33 -- setup/common.sh@19 -- # local var val 00:03:49.191 22:30:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.191 22:30:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.191 22:30:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.191 22:30:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.191 22:30:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.191 22:30:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65610712 kB' 'MemFree: 53635872 kB' 'MemUsed: 11974840 kB' 'SwapCached: 0 
kB' 'Active: 7504452 kB' 'Inactive: 116460 kB' 'Active(anon): 7131556 kB' 'Inactive(anon): 0 kB' 'Active(file): 372896 kB' 'Inactive(file): 116460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7269184 kB' 'Mapped: 114120 kB' 'AnonPages: 354940 kB' 'Shmem: 6779828 kB' 'KernelStack: 15720 kB' 'PageTables: 5048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217424 kB' 'Slab: 670796 kB' 'SReclaimable: 217424 kB' 'SUnreclaim: 453372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 
22:30:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.191 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': 
' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.192 22:30:33 -- setup/common.sh@32 -- # continue
00:03:49.192 22:30:33 -- setup/common.sh@31 -- # IFS=': '
00:03:49.192 22:30:33 -- setup/common.sh@31 -- # read -r var val _
00:03:49.192 22:30:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.192 22:30:33 -- setup/common.sh@33 -- # echo 0
00:03:49.192 22:30:33 -- setup/common.sh@33 -- # return 0
00:03:49.192 22:30:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:49.192 22:30:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:49.192 22:30:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:49.192 22:30:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:49.192 22:30:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:49.192 node0=1024 expecting 1024
00:03:49.192 22:30:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:49.192
00:03:49.192 real 0m4.263s
00:03:49.192 user 0m1.493s
00:03:49.192 sys 0m2.695s
00:03:49.192 22:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:49.192 22:30:33 -- common/autotest_common.sh@10 -- # set +x
00:03:49.192 ************************************
00:03:49.192 END TEST default_setup
00:03:49.192 ************************************
00:03:49.192 22:30:33 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:49.192 22:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:49.192 22:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:49.192 22:30:33 -- common/autotest_common.sh@10 -- # set +x
00:03:49.192 ************************************
00:03:49.192 START TEST per_node_1G_alloc
00:03:49.192 ************************************
00:03:49.192 22:30:33 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:03:49.192 22:30:33 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:49.192 22:30:33 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:49.192 22:30:33 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:49.192 22:30:33 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:49.192 22:30:33 -- setup/hugepages.sh@51 -- # shift
00:03:49.192 22:30:33 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:49.192 22:30:33 -- setup/hugepages.sh@52 -- # local node_ids
00:03:49.192 22:30:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:49.192 22:30:33 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:49.192 22:30:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:49.192 22:30:33 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:49.192 22:30:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:49.192 22:30:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:49.192 22:30:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:49.192 22:30:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:49.192 22:30:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:49.192 22:30:33 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:49.192 22:30:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:49.192 22:30:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:49.192 22:30:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:49.192 22:30:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:49.192 22:30:33 -- setup/hugepages.sh@73 -- # return 0
00:03:49.192 22:30:33 -- setup/hugepages.sh@146 -- # NRHUGE=512
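For readers skimming the trace: the per_node_1G_alloc prologue above is hugepages.sh converting a requested size of 1048576 kB (1 GiB) into per-node hugepage counts. With the system default hugepage size of 2048 kB that is 1048576 / 2048 = 512 pages for each node named in HUGENODE=0,1, i.e. 1024 pages in total. Below is a minimal standalone sketch of that arithmetic; the variable names are illustrative and this is not the SPDK helper itself:

  #!/usr/bin/env bash
  # Illustrative sketch (not setup/hugepages.sh): convert a requested size in kB
  # into a per-node hugepage count, as the trace above does.
  size_kb=1048576        # requested size (1 GiB), from get_test_nr_hugepages 1048576 0 1
  hugepage_kb=2048       # default hugepage size in kB on this system
  node_ids=(0 1)         # HUGENODE=0,1

  nr_hugepages=$(( size_kb / hugepage_kb ))   # 1048576 / 2048 = 512

  declare -A nodes_test=()
  for node in "${node_ids[@]}"; do
      nodes_test[$node]=$nr_hugepages         # 512 pages requested on each node
  done

  for node in "${!nodes_test[@]}"; do
      echo "node${node}=${nodes_test[$node]}"
  done

The NRHUGE=512 and HUGENODE=0,1 settings seen at the end of the trace are what scripts/setup.sh, invoked next, works from when it reserves those pages.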
22:30:33 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:49.192 22:30:33 -- setup/hugepages.sh@146 -- # setup output 00:03:49.192 22:30:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.192 22:30:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:53.408 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:53.408 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:53.408 22:30:37 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:53.408 22:30:37 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:53.408 22:30:37 -- setup/hugepages.sh@89 -- # local node 00:03:53.408 22:30:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.408 22:30:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.408 22:30:37 -- setup/hugepages.sh@92 -- # local surp 00:03:53.408 22:30:37 -- setup/hugepages.sh@93 -- # local resv 00:03:53.408 22:30:37 -- setup/hugepages.sh@94 -- # local anon 00:03:53.408 22:30:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.408 22:30:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.408 22:30:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.408 22:30:37 -- setup/common.sh@18 -- # local node= 00:03:53.408 22:30:37 -- setup/common.sh@19 -- # local var val 00:03:53.408 22:30:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.408 22:30:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.408 22:30:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.408 22:30:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.408 22:30:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.408 22:30:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.408 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.408 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.408 22:30:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109502096 kB' 'MemAvailable: 109885416 kB' 'Buffers: 2736 kB' 'Cached: 15401760 kB' 'SwapCached: 0 kB' 'Active: 15641776 kB' 'Inactive: 371152 kB' 'Active(anon): 14958468 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611816 kB' 'Mapped: 198768 
kB' 'Shmem: 14350036 kB' 'KReclaimable: 336912 kB' 'Slab: 1199404 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862492 kB' 'KernelStack: 27552 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16406812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237508 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:53.408 22:30:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.408 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.408 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.408 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.408 22:30:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.408 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.408 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.408 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.409 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.409 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.410 22:30:37 -- setup/common.sh@33 -- # echo 0 00:03:53.410 22:30:37 -- setup/common.sh@33 -- # return 0 00:03:53.410 22:30:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:53.410 22:30:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.410 22:30:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.410 22:30:37 -- setup/common.sh@18 -- # local node= 00:03:53.410 22:30:37 -- setup/common.sh@19 -- # local var val 00:03:53.410 22:30:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.410 22:30:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.410 22:30:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.410 22:30:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.410 22:30:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.410 22:30:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109501664 kB' 'MemAvailable: 109884984 kB' 'Buffers: 2736 kB' 'Cached: 15401760 kB' 'SwapCached: 0 kB' 'Active: 15641832 kB' 'Inactive: 371152 kB' 'Active(anon): 14958524 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611884 kB' 'Mapped: 198736 kB' 'Shmem: 14350036 kB' 'KReclaimable: 336912 kB' 'Slab: 1199388 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862476 kB' 'KernelStack: 27520 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16406824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237492 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:37 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.410 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.410 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 
22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.411 22:30:38 -- setup/common.sh@33 -- # echo 0 00:03:53.411 22:30:38 -- setup/common.sh@33 -- # return 0 00:03:53.411 22:30:38 -- setup/hugepages.sh@99 -- # surp=0 00:03:53.411 22:30:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.411 22:30:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.411 22:30:38 -- setup/common.sh@18 -- # local node= 00:03:53.411 22:30:38 -- setup/common.sh@19 -- # local var val 00:03:53.411 22:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.411 22:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.411 22:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.411 22:30:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.411 22:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.411 22:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109502492 kB' 'MemAvailable: 109885812 kB' 'Buffers: 2736 kB' 'Cached: 15401772 kB' 'SwapCached: 0 kB' 'Active: 15641864 kB' 'Inactive: 371152 kB' 'Active(anon): 14958556 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611888 kB' 'Mapped: 198736 kB' 'Shmem: 14350048 kB' 'KReclaimable: 336912 kB' 'Slab: 1199388 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862476 kB' 'KernelStack: 27520 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16406836 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237492 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.411 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.411 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 
-- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.412 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.412 22:30:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 
00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.413 22:30:38 -- setup/common.sh@33 -- # echo 0 00:03:53.413 22:30:38 -- setup/common.sh@33 -- # return 0 00:03:53.413 22:30:38 -- setup/hugepages.sh@100 -- # resv=0 00:03:53.413 22:30:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:53.413 nr_hugepages=1024 00:03:53.413 22:30:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.413 resv_hugepages=0 00:03:53.413 22:30:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.413 surplus_hugepages=0 00:03:53.413 22:30:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.413 anon_hugepages=0 00:03:53.413 22:30:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.413 22:30:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
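Note: the xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one "Field: value" pair at a time with IFS=': ' and read -r, echoing the value only when the field name matches the requested key (here HugePages_Rsvd, which is 0). A minimal sketch of that lookup, written as a hypothetical standalone helper rather than the exact SPDK function, looks like this:

#!/usr/bin/env bash
# Minimal sketch, assuming a simplified stand-in for the get_meminfo helper traced
# above; not the exact SPDK setup/common.sh implementation.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node index is supplied, read that NUMA node's sysfs meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }           # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                       # numeric value; a trailing "kB" ends up in $_
            return 0
        fi
    done < "$mem_f"
    echo 0                                    # field absent: report 0, as the trace does
}

# Example (hypothetical helper defined above):
#   get_meminfo_sketch HugePages_Rsvd     -> 0 on this machine
#   get_meminfo_sketch HugePages_Surp 0   -> surplus pages on NUMA node 0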
00:03:53.413 22:30:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.413 22:30:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.413 22:30:38 -- setup/common.sh@18 -- # local node= 00:03:53.413 22:30:38 -- setup/common.sh@19 -- # local var val 00:03:53.413 22:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.413 22:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.413 22:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.413 22:30:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.413 22:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.413 22:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109502492 kB' 'MemAvailable: 109885812 kB' 'Buffers: 2736 kB' 'Cached: 15401788 kB' 'SwapCached: 0 kB' 'Active: 15641756 kB' 'Inactive: 371152 kB' 'Active(anon): 14958448 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611748 kB' 'Mapped: 198736 kB' 'Shmem: 14350064 kB' 'KReclaimable: 336912 kB' 'Slab: 1199388 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862476 kB' 'KernelStack: 27504 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16406852 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237492 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.413 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.413 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 
-- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 
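For context, the HugePages_Total lookup in progress here feeds the consistency check recorded a few entries further on (setup/hugepages.sh@110): the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages. A hedged stand-alone view of the same arithmetic, independent of the test harness, is:

# Illustrative check only, not part of the SPDK scripts: verify that the kernel's
# hugepage accounting matches the 1024 pages this test requested.
read -r total free resv surp < <(awk '
    /^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2}
    /^HugePages_Rsvd:/  {r=$2} /^HugePages_Surp:/ {s=$2}
    END {print t, f, r, s}' /proc/meminfo)
nr_requested=1024
(( total == nr_requested + surp + resv )) && echo "hugepage accounting consistent" \
    || echo "mismatch: total=$total requested=$nr_requested surp=$surp resv=$resv"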
00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- 
setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.414 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.414 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.415 22:30:38 -- setup/common.sh@33 -- # echo 1024 00:03:53.415 22:30:38 -- setup/common.sh@33 -- # return 0 00:03:53.415 22:30:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.415 22:30:38 -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.415 22:30:38 -- setup/hugepages.sh@27 -- # local node 00:03:53.415 22:30:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.415 22:30:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.415 22:30:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.415 22:30:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.415 22:30:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.415 22:30:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.415 22:30:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.415 22:30:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.415 22:30:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.415 22:30:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.415 22:30:38 -- setup/common.sh@18 -- # local node=0 00:03:53.415 22:30:38 -- setup/common.sh@19 -- # local var val 00:03:53.415 22:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.415 22:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.415 22:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.415 22:30:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.415 22:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.415 22:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:53.415 22:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65610712 kB' 'MemFree: 54671504 kB' 'MemUsed: 10939208 kB' 'SwapCached: 0 kB' 'Active: 7503524 kB' 'Inactive: 116460 kB' 'Active(anon): 7130628 kB' 'Inactive(anon): 0 kB' 'Active(file): 372896 kB' 'Inactive(file): 116460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7269272 kB' 'Mapped: 114128 kB' 'AnonPages: 353888 kB' 'Shmem: 6779916 kB' 'KernelStack: 15640 kB' 'PageTables: 4912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217424 kB' 'Slab: 670608 kB' 'SReclaimable: 217424 kB' 'SUnreclaim: 453184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- 
# continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.415 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.415 22:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@33 -- # echo 0 00:03:53.416 22:30:38 -- setup/common.sh@33 -- # return 0 00:03:53.416 22:30:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.416 22:30:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.416 22:30:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.416 22:30:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:53.416 22:30:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.416 22:30:38 -- setup/common.sh@18 -- # local node=1 00:03:53.416 22:30:38 -- setup/common.sh@19 -- # local var val 00:03:53.416 22:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.416 22:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.416 22:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:53.416 22:30:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:53.416 22:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.416 22:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65887864 kB' 'MemFree: 54830744 kB' 'MemUsed: 11057120 kB' 'SwapCached: 0 kB' 'Active: 8138352 kB' 'Inactive: 254692 kB' 'Active(anon): 7827940 kB' 'Inactive(anon): 0 kB' 'Active(file): 310412 kB' 'Inactive(file): 254692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8135276 kB' 'Mapped: 84608 kB' 'AnonPages: 257936 kB' 'Shmem: 7570172 kB' 'KernelStack: 11864 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119488 kB' 'Slab: 528780 kB' 'SReclaimable: 119488 kB' 'SUnreclaim: 409292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 
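The node0 and node1 passes above repeat the same field scan against /sys/devices/system/node/nodeN/meminfo to confirm that the 1024 hugepages were split evenly, which is what the "node0=512 expecting 512" and "node1=512 expecting 512" lines further down assert. Purely as an illustration (not part of the test suite), the same per-node counts can be read directly from the sysfs hugepages directories:

# Illustrative only: read per-NUMA-node 2 MB hugepage counts straight from sysfs.
# This is the same information the node0/node1 meminfo scans above extract field by field.
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    total=$(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages)
    free=$(cat "$node"/hugepages/hugepages-2048kB/free_hugepages)
    surp=$(cat "$node"/hugepages/hugepages-2048kB/surplus_hugepages)
    echo "node${n}: total=${total} free=${free} surplus=${surp}"
done
# On the machine in this log both nodes report total=512, matching the
# "node0=512 expecting 512" and "node1=512 expecting 512" checks that follow.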
00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.416 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.416 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.417 22:30:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # continue 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.417 22:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.417 22:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.417 22:30:38 -- setup/common.sh@33 -- # echo 0 00:03:53.417 22:30:38 -- setup/common.sh@33 -- # return 0 00:03:53.417 22:30:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.417 22:30:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.417 22:30:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.417 22:30:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.417 22:30:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:53.417 node0=512 expecting 512 00:03:53.417 22:30:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.417 22:30:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.417 22:30:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.417 22:30:38 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:53.417 node1=512 expecting 512 00:03:53.417 22:30:38 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:53.417 00:03:53.417 real 0m4.280s 00:03:53.417 user 0m1.699s 00:03:53.417 sys 0m2.640s 00:03:53.417 22:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.417 22:30:38 -- common/autotest_common.sh@10 -- # set +x 00:03:53.417 ************************************ 00:03:53.417 END TEST per_node_1G_alloc 00:03:53.417 ************************************ 00:03:53.417 22:30:38 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:53.417 22:30:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.417 22:30:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.417 22:30:38 -- common/autotest_common.sh@10 -- # set +x 00:03:53.417 ************************************ 00:03:53.417 START TEST even_2G_alloc 00:03:53.417 ************************************ 00:03:53.417 22:30:38 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:53.417 22:30:38 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:53.417 22:30:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.417 22:30:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:53.417 22:30:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.417 22:30:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.417 22:30:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:53.417 22:30:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:53.417 22:30:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.417 22:30:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.417 22:30:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.417 22:30:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.417 22:30:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.417 22:30:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.417 22:30:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:53.417 22:30:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.417 22:30:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:53.417 22:30:38 -- setup/hugepages.sh@83 -- # : 512 00:03:53.417 22:30:38 -- setup/hugepages.sh@84 -- # : 1 00:03:53.417 22:30:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.417 22:30:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:53.417 22:30:38 -- setup/hugepages.sh@83 -- # : 0 00:03:53.417 22:30:38 -- setup/hugepages.sh@84 -- # : 0 00:03:53.417 22:30:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.678 22:30:38 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:53.678 22:30:38 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:53.678 22:30:38 -- setup/hugepages.sh@153 -- # setup output 00:03:53.678 22:30:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.678 22:30:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.894 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:57.894 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:57.894 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.894 22:30:42 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:57.894 22:30:42 -- setup/hugepages.sh@89 -- # local node 00:03:57.894 22:30:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.894 22:30:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.894 22:30:42 -- setup/hugepages.sh@92 -- # local surp 00:03:57.894 22:30:42 -- setup/hugepages.sh@93 -- # local resv 00:03:57.894 22:30:42 -- setup/hugepages.sh@94 -- # local anon 00:03:57.894 22:30:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.894 22:30:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.894 22:30:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.894 22:30:42 -- setup/common.sh@18 -- # local node= 00:03:57.894 22:30:42 -- setup/common.sh@19 -- # local var val 00:03:57.894 22:30:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.894 22:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.894 22:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.894 22:30:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.894 22:30:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.894 22:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.894 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.894 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.894 22:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109573800 kB' 'MemAvailable: 109957120 kB' 'Buffers: 2736 kB' 'Cached: 15401908 kB' 'SwapCached: 0 kB' 'Active: 15644952 kB' 'Inactive: 371152 kB' 'Active(anon): 14961644 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614332 kB' 'Mapped: 198912 kB' 'Shmem: 14350184 kB' 'KReclaimable: 336912 kB' 'Slab: 1199412 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862500 kB' 'KernelStack: 27696 kB' 'PageTables: 9532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16431112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237684 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:57.894 22:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.894 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.894 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.894 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.894 22:30:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.894 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.894 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.894 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 
22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.895 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.895 22:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.895 22:30:42 -- 
setup/common.sh@33 -- # echo 0 00:03:57.895 22:30:42 -- setup/common.sh@33 -- # return 0 00:03:57.895 22:30:42 -- setup/hugepages.sh@97 -- # anon=0 00:03:57.895 22:30:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.895 22:30:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.895 22:30:42 -- setup/common.sh@18 -- # local node= 00:03:57.895 22:30:42 -- setup/common.sh@19 -- # local var val 00:03:57.895 22:30:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.895 22:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.895 22:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.895 22:30:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.896 22:30:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.896 22:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109574164 kB' 'MemAvailable: 109957484 kB' 'Buffers: 2736 kB' 'Cached: 15401912 kB' 'SwapCached: 0 kB' 'Active: 15643152 kB' 'Inactive: 371152 kB' 'Active(anon): 14959844 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612428 kB' 'Mapped: 198860 kB' 'Shmem: 14350188 kB' 'KReclaimable: 336912 kB' 'Slab: 1199252 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862340 kB' 'KernelStack: 27600 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16407384 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237508 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 
22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 
22:30:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': 
' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.896 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.896 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.897 22:30:42 -- setup/common.sh@33 -- # echo 0 00:03:57.897 22:30:42 -- setup/common.sh@33 -- # return 0 00:03:57.897 22:30:42 -- setup/hugepages.sh@99 -- # surp=0 00:03:57.897 22:30:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.897 22:30:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.897 22:30:42 -- setup/common.sh@18 -- # local node= 00:03:57.897 22:30:42 -- setup/common.sh@19 -- # local var val 00:03:57.897 22:30:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.897 22:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.897 22:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.897 22:30:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.897 22:30:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.897 22:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109574712 kB' 'MemAvailable: 109958032 kB' 'Buffers: 2736 kB' 'Cached: 15401932 kB' 'SwapCached: 0 kB' 'Active: 15642164 kB' 'Inactive: 371152 kB' 'Active(anon): 14958856 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611888 kB' 'Mapped: 198784 kB' 'Shmem: 14350208 kB' 'KReclaimable: 336912 kB' 'Slab: 1199168 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862256 kB' 'KernelStack: 27552 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16407404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237508 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 
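The xtrace above is setup/common.sh's meminfo lookup: the whole of /proc/meminfo is printed into a loop that splits each line on IFS=': ', reads the key into var and the value into val, and keeps issuing "continue" until the requested field (HugePages_Rsvd in this pass, HugePages_Surp in the previous one) matches, at which point the value is echoed back to hugepages.sh. Below is a minimal sketch of that lookup pattern; it is illustrative only (get_meminfo_sketch is a made-up name, not the SPDK helper), but it exercises the same IFS=': ' / read -r var val _ mechanics the trace is stepping through:

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <field> [numa-node]
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    # With a node argument, read the per-node sysfs copy instead,
    # as the trace does later for node0/node1.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix every key with "Node N "; drop that prefix.
        line=${line#"Node $node "}
        # Split "Key:   value kB" into key and value on ':' or space.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # value in kB, or a bare count for HugePages_* fields
            return 0
        fi
    done < "$mem_f"
    echo 0                # sketch-only fallback when the field is missing
}

With that sketch, surp=$(get_meminfo_sketch HugePages_Surp) and resv=$(get_meminfo_sketch HugePages_Rsvd) both come back 0 on this host, matching the surp=0 and resv=0 assignments recorded in the log.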
00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.897 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.897 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- 
setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.898 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.898 22:30:42 -- setup/common.sh@33 -- # echo 0 00:03:57.898 22:30:42 -- setup/common.sh@33 -- # return 0 00:03:57.898 22:30:42 -- setup/hugepages.sh@100 -- # resv=0 00:03:57.898 22:30:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.898 nr_hugepages=1024 00:03:57.898 22:30:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.898 resv_hugepages=0 00:03:57.898 22:30:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.898 surplus_hugepages=0 00:03:57.898 22:30:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.898 anon_hugepages=0 00:03:57.898 22:30:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.898 22:30:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.898 22:30:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.898 22:30:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.898 22:30:42 -- setup/common.sh@18 -- # local node= 00:03:57.898 22:30:42 -- setup/common.sh@19 -- # local var val 00:03:57.898 22:30:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.898 22:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.898 22:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.898 22:30:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.898 22:30:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.898 22:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.898 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109574460 kB' 'MemAvailable: 109957780 kB' 'Buffers: 2736 kB' 'Cached: 15401944 kB' 'SwapCached: 0 kB' 'Active: 15642176 kB' 'Inactive: 371152 kB' 'Active(anon): 14958868 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611884 kB' 'Mapped: 198784 kB' 'Shmem: 14350220 kB' 'KReclaimable: 336912 kB' 'Slab: 1199168 
kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862256 kB' 'KernelStack: 27552 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16407420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237508 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 
22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.899 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.899 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 
00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.900 22:30:42 -- setup/common.sh@33 -- # echo 1024 00:03:57.900 22:30:42 -- setup/common.sh@33 -- # return 0 00:03:57.900 22:30:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.900 22:30:42 -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.900 22:30:42 -- setup/hugepages.sh@27 -- # local node 00:03:57.900 22:30:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.900 22:30:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.900 22:30:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.900 22:30:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.900 22:30:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.900 22:30:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.900 22:30:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.900 22:30:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.900 22:30:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.900 22:30:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.900 22:30:42 -- setup/common.sh@18 -- # local node=0 00:03:57.900 22:30:42 -- setup/common.sh@19 -- # local var val 00:03:57.900 22:30:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.900 22:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.900 22:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.900 22:30:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.900 22:30:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.900 22:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65610712 kB' 'MemFree: 54728848 kB' 'MemUsed: 10881864 kB' 'SwapCached: 0 kB' 'Active: 7504256 kB' 'Inactive: 116460 kB' 'Active(anon): 7131360 kB' 'Inactive(anon): 0 kB' 'Active(file): 372896 kB' 'Inactive(file): 116460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7269360 kB' 'Mapped: 114176 kB' 'AnonPages: 354540 kB' 'Shmem: 6780004 kB' 'KernelStack: 15752 kB' 'PageTables: 5204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217424 kB' 'Slab: 670584 kB' 'SReclaimable: 217424 kB' 'SUnreclaim: 453160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 
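At this point the totals reconcile: hugepages.sh checks (( 1024 == nr_hugepages + surp + resv )), sees HugePages_Total echo 1024, then walks /sys/devices/system/node/node* to split the expectation across the two NUMA nodes at 512 pages each (nodes_sys set to 512 for both nodes, no_nodes=2) before re-reading each node's own meminfo file, starting with node0 above. The following is a simplified, hypothetical re-check of that per-node split (verify_node_hugepages_sketch is an invented name, and the even 512/512 split is taken from this particular run, not guaranteed by the script):

verify_node_hugepages_sketch() {
    # Expect the system-wide 1024 hugepages to be spread evenly over the NUMA nodes.
    local node expected=512 total
    for node in /sys/devices/system/node/node[0-9]*; do
        # Per-node meminfo lines look like "Node 0 HugePages_Total:   512".
        total=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
        if (( total != expected )); then
            echo "node ${node##*node}: expected $expected hugepages, found $total" >&2
            return 1
        fi
    done
}

The surplus read-back that follows in the trace (HugePages_Surp per node, expected to be 0) is the same scan with a different field.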
00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.900 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.900 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@33 -- # echo 0 00:03:57.901 22:30:42 -- setup/common.sh@33 -- # return 0 00:03:57.901 22:30:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.901 22:30:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.901 22:30:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.901 22:30:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:57.901 22:30:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.901 22:30:42 -- setup/common.sh@18 -- # local node=1 00:03:57.901 22:30:42 -- setup/common.sh@19 -- # local var val 00:03:57.901 22:30:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.901 22:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.901 22:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.901 22:30:42 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.901 22:30:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.901 22:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65887864 kB' 'MemFree: 54846116 kB' 'MemUsed: 11041748 kB' 'SwapCached: 0 kB' 'Active: 8137956 kB' 'Inactive: 254692 kB' 'Active(anon): 7827544 kB' 'Inactive(anon): 0 kB' 'Active(file): 310412 kB' 'Inactive(file): 254692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8135324 kB' 'Mapped: 84608 kB' 'AnonPages: 257404 kB' 'Shmem: 7570220 kB' 'KernelStack: 11816 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119488 kB' 'Slab: 528584 kB' 'SReclaimable: 119488 kB' 'SUnreclaim: 409096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- 
setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.901 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.901 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # continue 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.902 22:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.902 22:30:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.902 22:30:42 -- setup/common.sh@33 -- # echo 0 00:03:57.902 22:30:42 -- setup/common.sh@33 -- # return 0 00:03:57.902 22:30:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.902 22:30:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.902 22:30:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.902 22:30:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.902 22:30:42 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:57.902 node0=512 expecting 512 00:03:57.902 22:30:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.902 22:30:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.902 22:30:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.902 22:30:42 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:57.902 node1=512 expecting 512 00:03:57.902 22:30:42 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:57.902 00:03:57.902 real 0m4.277s 00:03:57.902 user 0m1.633s 00:03:57.902 sys 0m2.696s 00:03:57.902 22:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.902 22:30:42 -- common/autotest_common.sh@10 -- # set +x 00:03:57.902 ************************************ 00:03:57.902 END TEST even_2G_alloc 00:03:57.902 ************************************ 00:03:57.902 22:30:42 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:57.902 22:30:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:57.902 22:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.902 22:30:42 -- common/autotest_common.sh@10 -- # set +x 00:03:57.902 ************************************ 00:03:57.902 START TEST odd_alloc 00:03:57.902 ************************************ 00:03:57.902 22:30:42 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:57.902 22:30:42 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:57.902 22:30:42 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:57.902 22:30:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:57.902 22:30:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.902 22:30:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:57.902 22:30:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:57.902 22:30:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:57.902 22:30:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.902 22:30:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:57.902 22:30:42 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.902 22:30:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.902 22:30:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.902 22:30:42 
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:57.902 22:30:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:57.902 22:30:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.902 22:30:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:57.902 22:30:42 -- setup/hugepages.sh@83 -- # : 513 00:03:57.902 22:30:42 -- setup/hugepages.sh@84 -- # : 1 00:03:57.902 22:30:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.902 22:30:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:57.902 22:30:42 -- setup/hugepages.sh@83 -- # : 0 00:03:57.902 22:30:42 -- setup/hugepages.sh@84 -- # : 0 00:03:57.902 22:30:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.902 22:30:42 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:57.902 22:30:42 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:57.902 22:30:42 -- setup/hugepages.sh@160 -- # setup output 00:03:57.902 22:30:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.902 22:30:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.117 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.117 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.118 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.118 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.118 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.118 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.118 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.118 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.118 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.118 22:30:46 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:02.118 22:30:46 -- setup/hugepages.sh@89 -- # local node 00:04:02.118 22:30:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.118 22:30:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.118 22:30:46 -- setup/hugepages.sh@92 -- # local surp 00:04:02.118 22:30:46 -- setup/hugepages.sh@93 -- # local resv 00:04:02.118 22:30:46 -- setup/hugepages.sh@94 -- # local anon 00:04:02.118 22:30:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.118 22:30:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.118 22:30:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.118 22:30:46 -- setup/common.sh@18 -- # local node= 00:04:02.118 22:30:46 -- setup/common.sh@19 -- # local var val 00:04:02.118 22:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.118 22:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.118 22:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.118 22:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.118 22:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.118 
22:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109548324 kB' 'MemAvailable: 109931644 kB' 'Buffers: 2736 kB' 'Cached: 15402068 kB' 'SwapCached: 0 kB' 'Active: 15644032 kB' 'Inactive: 371152 kB' 'Active(anon): 14960724 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613712 kB' 'Mapped: 198828 kB' 'Shmem: 14350344 kB' 'KReclaimable: 336912 kB' 'Slab: 1199088 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862176 kB' 'KernelStack: 27568 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73088292 kB' 'Committed_AS: 16408684 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237572 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 
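The odd_alloc trace requests 1025 default-sized (2048 kB) pages from HUGEMEM=2049 (2098176 kB), splits them across the two NUMA nodes, and then verify_nr_hugepages re-reads meminfo (HugePages_Total: 1025) with the per-field scan that continues below. A minimal sketch of the split arithmetic visible in the log; the function name and the round-up step are illustrative assumptions, not the setup/hugepages.sh code itself:

#!/usr/bin/env bash

# Distribute a hugepage count over NUMA nodes the way the trace shows:
# fill from the last node backwards, so integer division leaves the odd
# remainder on node 0 (node0=513, node1=512 for 1025 pages on 2 nodes).
split_hugepages_sketch() {
	local total=$1 no_nodes=$2
	local -a nodes
	local left=$total n i
	for (( n = no_nodes; n > 0; n-- )); do
		nodes[n - 1]=$(( left / n ))
		left=$(( left - nodes[n - 1] ))
	done
	for (( i = 0; i < no_nodes; i++ )); do
		printf 'node%d=%d\n' "$i" "${nodes[i]}"
	done
}

# HUGEMEM=2049 MB is 2098176 kB; one way to land on the 1025 pages the log
# requests is to round the division by the 2048 kB page size up:
#   nr_hugepages=$(( (2098176 + 2047) / 2048 ))   # 1025
split_hugepages_sketch 1025 2   # -> node0=513, node1=512

# The verification that follows in the trace then amounts to a check like
#   (( HugePages_Total == nr_hugepages + surp + resv ))
# with surp and resv read back as HugePages_Surp / HugePages_Rsvd.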
00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.118 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.119 22:30:46 -- setup/common.sh@33 -- # echo 0 00:04:02.119 22:30:46 -- setup/common.sh@33 -- # return 0 00:04:02.119 22:30:46 -- setup/hugepages.sh@97 -- # anon=0 00:04:02.119 22:30:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.119 22:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.119 22:30:46 -- setup/common.sh@18 -- # local node= 00:04:02.119 22:30:46 -- setup/common.sh@19 -- # local var val 00:04:02.119 22:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.119 22:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.119 22:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.119 22:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.119 22:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.119 22:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109551076 kB' 'MemAvailable: 109934396 kB' 'Buffers: 2736 kB' 'Cached: 15402072 kB' 'SwapCached: 0 kB' 'Active: 15644060 kB' 'Inactive: 371152 kB' 'Active(anon): 14960752 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613752 kB' 'Mapped: 198796 kB' 'Shmem: 14350348 kB' 'KReclaimable: 336912 kB' 'Slab: 1199060 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862148 kB' 'KernelStack: 27520 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73088292 kB' 'Committed_AS: 16408696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237556 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 
22:30:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.120 22:30:46 -- setup/common.sh@33 -- # echo 0 00:04:02.120 22:30:46 -- setup/common.sh@33 -- # return 0 00:04:02.120 22:30:46 -- setup/hugepages.sh@99 -- # surp=0 00:04:02.120 22:30:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.120 22:30:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.120 22:30:46 -- setup/common.sh@18 -- # local node= 00:04:02.120 22:30:46 -- setup/common.sh@19 -- # local var val 00:04:02.120 22:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.120 22:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.120 22:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.120 22:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.120 22:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.120 22:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109552676 kB' 'MemAvailable: 109935996 kB' 'Buffers: 2736 kB' 'Cached: 15402084 kB' 'SwapCached: 0 kB' 'Active: 15644588 kB' 'Inactive: 371152 kB' 'Active(anon): 14961280 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614348 kB' 'Mapped: 198796 kB' 'Shmem: 14350360 kB' 'KReclaimable: 336912 kB' 'Slab: 1199060 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862148 kB' 'KernelStack: 27568 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73088292 kB' 'Committed_AS: 16408712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237572 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:02.120 22:30:46 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.120 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.120 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- 
setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 
22:30:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.121 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.121 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.122 22:30:46 -- setup/common.sh@33 -- # echo 0 00:04:02.122 
22:30:46 -- setup/common.sh@33 -- # return 0 00:04:02.122 22:30:46 -- setup/hugepages.sh@100 -- # resv=0 00:04:02.122 22:30:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:02.122 nr_hugepages=1025 00:04:02.122 22:30:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.122 resv_hugepages=0 00:04:02.122 22:30:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.122 surplus_hugepages=0 00:04:02.122 22:30:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.122 anon_hugepages=0 00:04:02.122 22:30:46 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:02.122 22:30:46 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:02.122 22:30:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.122 22:30:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.122 22:30:46 -- setup/common.sh@18 -- # local node= 00:04:02.122 22:30:46 -- setup/common.sh@19 -- # local var val 00:04:02.122 22:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.122 22:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.122 22:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.122 22:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.122 22:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.122 22:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109553984 kB' 'MemAvailable: 109937304 kB' 'Buffers: 2736 kB' 'Cached: 15402096 kB' 'SwapCached: 0 kB' 'Active: 15644132 kB' 'Inactive: 371152 kB' 'Active(anon): 14960824 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613756 kB' 'Mapped: 198796 kB' 'Shmem: 14350372 kB' 'KReclaimable: 336912 kB' 'Slab: 1199060 kB' 'SReclaimable: 336912 kB' 'SUnreclaim: 862148 kB' 'KernelStack: 27520 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73088292 kB' 'Committed_AS: 16408728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237556 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
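For reference, a minimal sketch (illustrative only, not part of the captured output) of the accounting the trace above performs: setup/common.sh scans /proc/meminfo one key at a time, and setup/hugepages.sh then checks that HugePages_Total equals the requested page count plus the surplus and reserved pages it just read. The helper name get_field and the hard-coded nr_hugepages are assumptions for the sketch; the meminfo keys and the 1025/0/0 values come from the trace.

# Return the value of a single /proc/meminfo field, e.g. HugePages_Rsvd.
get_field() {
  local key=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$key" ]] && { echo "$val"; return 0; }
  done < /proc/meminfo
}
nr_hugepages=1025                      # odd count requested earlier by the test
surp=$(get_field HugePages_Surp)       # 0 in the trace above
resv=$(get_field HugePages_Rsvd)       # 0 in the trace above
total=$(get_field HugePages_Total)     # 1025 in the trace above
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2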
00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.122 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.122 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 
00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 
22:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.123 22:30:46 -- setup/common.sh@33 -- # echo 1025 00:04:02.123 22:30:46 -- setup/common.sh@33 -- # return 0 00:04:02.123 22:30:46 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:02.123 22:30:46 -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.123 22:30:46 -- setup/hugepages.sh@27 -- # local node 00:04:02.123 22:30:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.123 22:30:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.123 22:30:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.123 22:30:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:02.123 22:30:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.123 22:30:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.123 22:30:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.123 22:30:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.123 22:30:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.123 22:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.123 22:30:46 
-- setup/common.sh@18 -- # local node=0 00:04:02.123 22:30:46 -- setup/common.sh@19 -- # local var val 00:04:02.123 22:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.123 22:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.123 22:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.123 22:30:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.123 22:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.123 22:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65610712 kB' 'MemFree: 54695736 kB' 'MemUsed: 10914976 kB' 'SwapCached: 0 kB' 'Active: 7503848 kB' 'Inactive: 116460 kB' 'Active(anon): 7130952 kB' 'Inactive(anon): 0 kB' 'Active(file): 372896 kB' 'Inactive(file): 116460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7269460 kB' 'Mapped: 114188 kB' 'AnonPages: 354008 kB' 'Shmem: 6780104 kB' 'KernelStack: 15656 kB' 'PageTables: 4916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217424 kB' 'Slab: 670640 kB' 'SReclaimable: 217424 kB' 'SUnreclaim: 453216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.123 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.123 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
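The same key/value scan is repeated per NUMA node; below is a sketch mirroring the trace (the node number and the key are just examples), where the only differences from the system-wide case are the source file and the leading "Node <N> " prefix that gets stripped before parsing.

node=0                                            # example node
mem_f=/sys/devices/system/node/node${node}/meminfo
shopt -s extglob                                  # needed for the +([0-9]) pattern
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")                  # drop the "Node 0 " prefix
for line in "${mem[@]}"; do
  IFS=': ' read -r var val _ <<< "$line"
  [[ $var == HugePages_Surp ]] && echo "node${node} HugePages_Surp=${val}"
done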
00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@33 -- # echo 0 00:04:02.124 22:30:46 -- setup/common.sh@33 -- # return 0 00:04:02.124 22:30:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.124 22:30:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.124 22:30:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.124 22:30:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.124 22:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.124 22:30:46 -- setup/common.sh@18 -- # local node=1 00:04:02.124 22:30:46 -- setup/common.sh@19 -- # local var val 00:04:02.124 22:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.124 22:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.124 22:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.124 22:30:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.124 22:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.124 22:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65887864 kB' 'MemFree: 54858728 kB' 'MemUsed: 11029136 kB' 'SwapCached: 0 kB' 'Active: 8140924 kB' 'Inactive: 254692 kB' 'Active(anon): 7830512 kB' 'Inactive(anon): 0 kB' 'Active(file): 310412 kB' 'Inactive(file): 254692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8135388 kB' 'Mapped: 84608 kB' 'AnonPages: 260528 kB' 'Shmem: 7570284 kB' 'KernelStack: 11880 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119488 kB' 'Slab: 528420 kB' 'SReclaimable: 119488 kB' 'SUnreclaim: 408932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:30:46 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.124 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- 
setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # continue 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:30:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:30:46 -- setup/common.sh@33 -- # echo 0 00:04:02.125 22:30:46 -- setup/common.sh@33 -- # return 0 00:04:02.125 22:30:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.125 22:30:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.125 22:30:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.125 22:30:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.125 22:30:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:02.125 node0=512 expecting 513 00:04:02.125 22:30:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.125 22:30:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
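The per-node check mirrored below is order-insensitive: sorted_t/sorted_s collect the distinct per-node counts as associative-array keys and the test compares the key sets, which is why node0=512/node1=513 can satisfy an expectation of 513/512. The array contents are taken from the trace; the rest is an illustrative sketch, not the script itself.

declare -A sorted_t=() sorted_s=()
nodes_sys=(512 513)          # counts read per node above (node0, node1)
nodes_test=(513 512)         # split the test asked for
for node in "${!nodes_test[@]}"; do
  sorted_t[${nodes_test[node]}]=1
  sorted_s[${nodes_sys[node]}]=1
done
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node hugepage split matches'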
00:04:02.125 22:30:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.125 22:30:46 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:02.125 node1=513 expecting 512 00:04:02.125 22:30:46 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:02.125 00:04:02.125 real 0m4.319s 00:04:02.125 user 0m1.760s 00:04:02.125 sys 0m2.630s 00:04:02.125 22:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.125 22:30:46 -- common/autotest_common.sh@10 -- # set +x 00:04:02.125 ************************************ 00:04:02.125 END TEST odd_alloc 00:04:02.125 ************************************ 00:04:02.125 22:30:46 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:02.125 22:30:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:02.125 22:30:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:02.125 22:30:46 -- common/autotest_common.sh@10 -- # set +x 00:04:02.125 ************************************ 00:04:02.125 START TEST custom_alloc 00:04:02.125 ************************************ 00:04:02.125 22:30:46 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:02.125 22:30:46 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:02.126 22:30:46 -- setup/hugepages.sh@169 -- # local node 00:04:02.126 22:30:46 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:02.126 22:30:46 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:02.126 22:30:46 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:02.126 22:30:46 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:02.126 22:30:46 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:02.126 22:30:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:02.126 22:30:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.126 22:30:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.126 22:30:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.126 22:30:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.126 22:30:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.126 22:30:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.126 22:30:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.126 22:30:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:02.126 22:30:46 -- setup/hugepages.sh@83 -- # : 256 00:04:02.126 22:30:46 -- setup/hugepages.sh@84 -- # : 1 00:04:02.126 22:30:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:02.126 22:30:46 -- setup/hugepages.sh@83 -- # : 0 00:04:02.126 22:30:46 -- setup/hugepages.sh@84 -- # : 0 00:04:02.126 22:30:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:02.126 22:30:46 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:02.126 22:30:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.126 22:30:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.126 22:30:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.126 22:30:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.126 22:30:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.126 22:30:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.126 22:30:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.126 22:30:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.126 22:30:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.126 22:30:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.126 22:30:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:02.126 22:30:46 -- setup/hugepages.sh@78 -- # return 0 00:04:02.126 22:30:46 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:02.126 22:30:46 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:02.126 22:30:46 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:02.126 22:30:46 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:02.126 22:30:46 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:02.126 22:30:46 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:02.126 22:30:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.126 22:30:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.126 22:30:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.126 22:30:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.126 22:30:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.126 22:30:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.126 22:30:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:02.126 22:30:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.126 22:30:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:02.126 22:30:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.126 22:30:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:02.126 22:30:46 -- setup/hugepages.sh@78 -- # return 0 00:04:02.126 22:30:46 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:02.126 22:30:46 -- setup/hugepages.sh@187 -- # setup output 00:04:02.126 22:30:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.126 22:30:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.375 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.376 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.376 22:30:51 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:06.376 22:30:51 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:06.376 22:30:51 -- setup/hugepages.sh@89 -- # local node 00:04:06.376 22:30:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.376 22:30:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.376 22:30:51 -- setup/hugepages.sh@92 -- # local surp 00:04:06.376 22:30:51 -- setup/hugepages.sh@93 -- # local resv 00:04:06.376 22:30:51 -- setup/hugepages.sh@94 -- # local anon 00:04:06.376 22:30:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.376 22:30:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.376 22:30:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.376 22:30:51 -- setup/common.sh@18 -- # local node= 00:04:06.376 22:30:51 -- setup/common.sh@19 -- # local var val 00:04:06.376 22:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.376 22:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.376 22:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.376 22:30:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.376 22:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.376 22:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 108505440 kB' 'MemAvailable: 108888712 kB' 'Buffers: 2736 kB' 'Cached: 15402236 kB' 'SwapCached: 0 kB' 'Active: 15645988 kB' 'Inactive: 371152 kB' 'Active(anon): 14962680 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615196 kB' 'Mapped: 198900 kB' 'Shmem: 14350512 kB' 'KReclaimable: 336816 kB' 'Slab: 1199764 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 862948 kB' 'KernelStack: 27520 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 72565028 kB' 'Committed_AS: 16409852 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237476 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 
22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.376 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.376 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.377 22:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.377 22:30:51 -- setup/common.sh@33 -- # echo 0 00:04:06.377 22:30:51 -- setup/common.sh@33 -- # return 0 00:04:06.377 22:30:51 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.377 22:30:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.377 22:30:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.377 22:30:51 -- setup/common.sh@18 -- # local node= 00:04:06.377 22:30:51 -- setup/common.sh@19 -- # local var val 00:04:06.377 22:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.377 22:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.377 22:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.377 22:30:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.377 22:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.377 22:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.377 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 108504056 kB' 'MemAvailable: 108887328 kB' 'Buffers: 2736 kB' 'Cached: 15402240 kB' 'SwapCached: 0 kB' 'Active: 15645496 kB' 'Inactive: 371152 kB' 'Active(anon): 14962188 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614684 kB' 'Mapped: 198888 kB' 'Shmem: 14350516 kB' 'KReclaimable: 336816 kB' 'Slab: 1199748 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 862932 kB' 'KernelStack: 27488 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 72565028 kB' 'Committed_AS: 16409864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237460 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 
00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.378 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.378 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.379 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.379 22:30:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.379 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.379 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.379 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.379 22:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.379 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.379 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.379 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.379 22:30:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.644 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.644 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.644 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.644 22:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.644 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.644 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.644 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.644 22:30:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.644 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.644 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.644 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.644 22:30:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.644 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # 
continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.645 22:30:51 -- setup/common.sh@33 -- # echo 0 00:04:06.645 22:30:51 -- setup/common.sh@33 -- # return 0 00:04:06.645 22:30:51 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.645 22:30:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.645 22:30:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.645 22:30:51 -- setup/common.sh@18 -- # local node= 00:04:06.645 22:30:51 -- setup/common.sh@19 -- # local var val 00:04:06.645 22:30:51 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:06.645 22:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.645 22:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.645 22:30:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.645 22:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.645 22:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 108505368 kB' 'MemAvailable: 108888640 kB' 'Buffers: 2736 kB' 'Cached: 15402252 kB' 'SwapCached: 0 kB' 'Active: 15644676 kB' 'Inactive: 371152 kB' 'Active(anon): 14961368 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614284 kB' 'Mapped: 198808 kB' 'Shmem: 14350528 kB' 'KReclaimable: 336816 kB' 'Slab: 1199748 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 862932 kB' 'KernelStack: 27456 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 72565028 kB' 'Committed_AS: 16409880 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237460 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 
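The xtrace above (and the identical passes before it for AnonHugePages and HugePages_Surp) is setup/common.sh's get_meminfo walking every line of /proc/meminfo, or of a per-node meminfo file when a node index is given, until the requested key matches and its value is echoed. Purely as a rough, hypothetical re-statement of that traced pattern (get_meminfo_sketch below is not the SPDK helper itself), the logic amounts to:

#!/usr/bin/env bash
# Simplified sketch of the get_meminfo pattern seen in the trace above
# (assumes the standard /proc/meminfo and per-node meminfo layouts).
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}                  # field name (e.g. HugePages_Rsvd), optional NUMA node
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix every line with "Node N "
    local IFS=': ' line var val _
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"         # split "Key:   value kB" into key and value
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    echo 0                                    # key absent: report 0, matching the '# echo 0' entries
}

Called as, say, get_meminfo_sketch HugePages_Surp, it would print 0 for the state captured in these dumps, which is the value the trace returns before setting surp=0.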
00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.645 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.645 22:30:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 
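A few entries further on in this scan the script echoes nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then asserts (( 1536 == nr_hugepages + surp + resv )). For this custom_alloc case the pool was requested as HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', so the per-node follow-up expects 512 pages on node 0 and 1024 on node 1. Purely as an illustration, and reading the per-node counters from sysfs rather than the per-node meminfo the script itself parses, that check boils down to something like:

# Hypothetical sketch of the node-split check behind the 'node1=513 expecting 512'-style output
# (assumes two NUMA nodes and 2048 kB hugepages at the usual sysfs path).
verify_split_sketch() {
    local -A expected=([0]=512 [1]=1024)   # from HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
    local node have total=0
    for node in 0 1; do
        have=$(< /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages)
        echo "node$node=$have expecting ${expected[$node]}"
        (( total += have ))
        [[ $have -eq ${expected[$node]} ]] || return 1
    done
    (( total == 1536 ))                    # mirrors (( 1536 == nr_hugepages + surp + resv ))
}

In the meminfo dumps above the global counters already read HugePages_Total: 1536 with HugePages_Free: 1536, and the node 0 dump that follows shows HugePages_Total: 512, consistent with that split.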
00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.646 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.646 22:30:51 -- setup/common.sh@33 -- # echo 0 00:04:06.646 22:30:51 -- setup/common.sh@33 -- # return 0 00:04:06.646 22:30:51 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.646 22:30:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:06.646 nr_hugepages=1536 00:04:06.646 22:30:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.646 resv_hugepages=0 00:04:06.646 22:30:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.646 surplus_hugepages=0 00:04:06.646 22:30:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.646 anon_hugepages=0 00:04:06.646 22:30:51 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:06.646 22:30:51 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:06.646 22:30:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.646 22:30:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.646 22:30:51 -- setup/common.sh@18 -- # local node= 00:04:06.646 22:30:51 -- setup/common.sh@19 -- # local var val 00:04:06.646 22:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.646 22:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.646 22:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.646 22:30:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.646 22:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.646 22:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.646 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
131498576 kB' 'MemFree: 108504784 kB' 'MemAvailable: 108888056 kB' 'Buffers: 2736 kB' 'Cached: 15402276 kB' 'SwapCached: 0 kB' 'Active: 15644292 kB' 'Inactive: 371152 kB' 'Active(anon): 14960984 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613808 kB' 'Mapped: 198808 kB' 'Shmem: 14350552 kB' 'KReclaimable: 336816 kB' 'Slab: 1199748 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 862932 kB' 'KernelStack: 27456 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 72565028 kB' 'Committed_AS: 16409892 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237460 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.647 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.647 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.648 22:30:51 -- setup/common.sh@33 -- # echo 1536 00:04:06.648 22:30:51 -- setup/common.sh@33 -- # return 0 00:04:06.648 22:30:51 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:06.648 22:30:51 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.648 22:30:51 -- setup/hugepages.sh@27 -- # local node 00:04:06.648 22:30:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.648 22:30:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.648 22:30:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.648 22:30:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.648 22:30:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.648 22:30:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.648 22:30:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.648 22:30:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.648 22:30:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.648 22:30:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.648 22:30:51 -- setup/common.sh@18 -- # local node=0 00:04:06.648 22:30:51 -- setup/common.sh@19 -- # local var val 00:04:06.648 22:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.648 22:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.648 22:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.648 22:30:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.648 22:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.648 22:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65610712 kB' 'MemFree: 54694104 kB' 'MemUsed: 10916608 kB' 'SwapCached: 0 kB' 'Active: 7504572 kB' 'Inactive: 116460 kB' 'Active(anon): 7131676 kB' 'Inactive(anon): 0 kB' 'Active(file): 372896 kB' 'Inactive(file): 116460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7269576 kB' 'Mapped: 114200 kB' 'AnonPages: 354756 kB' 'Shmem: 6780220 kB' 'KernelStack: 15656 kB' 'PageTables: 4992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217328 kB' 'Slab: 670736 kB' 'SReclaimable: 217328 kB' 'SUnreclaim: 453408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.648 22:30:51 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 
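The run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" records here is setup/common.sh's get_meminfo walking the node-0 meminfo snapshot printed just above, one field at a time with IFS=': ' read -r var val _, until it reaches HugePages_Surp. A minimal stand-alone sketch of that scan over a plain meminfo-style file (an illustration of the traced logic, not the project script itself):

    # Scan a meminfo-style file for one field; mirrors the IFS=': ' read loop in the trace.
    lookup_field() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # value only, e.g. 0
        done < "$mem_f"
        return 1                                                 # field not found
    }
    # lookup_field HugePages_Surp   -> 0 on this box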
00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.648 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.648 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- 
setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@33 -- # echo 0 00:04:06.649 22:30:51 -- setup/common.sh@33 -- # return 0 00:04:06.649 22:30:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.649 22:30:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.649 22:30:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.649 22:30:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 
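The lookup above came back with 0 surplus pages for node 0, hugepages.sh folded that into its per-node tally, and the same get_meminfo call is about to repeat for node 1. The property custom_alloc is really checking is that the per-node totals (512 pages on node 0, 1024 on node 1) account for the global figure asserted earlier at hugepages.sh@110. The same cross-check can be done directly against the standard kernel sysfs counters; a hedged sketch using generic paths that are not part of the test scripts:

    # With no surplus pages in play, per-node nr_hugepages should add up to the
    # global HugePages_Total.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    sum=0
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        (( sum += $(cat "$f") ))
    done
    echo "global=$total per-node-sum=$sum"   # expected here: 1536 and 1536 (512 + 1024)
    (( sum == total )) && echo "split accounted for"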
00:04:06.649 22:30:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.649 22:30:51 -- setup/common.sh@18 -- # local node=1 00:04:06.649 22:30:51 -- setup/common.sh@19 -- # local var val 00:04:06.649 22:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.649 22:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.649 22:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.649 22:30:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.649 22:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.649 22:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65887864 kB' 'MemFree: 53810940 kB' 'MemUsed: 12076924 kB' 'SwapCached: 0 kB' 'Active: 8139520 kB' 'Inactive: 254692 kB' 'Active(anon): 7829108 kB' 'Inactive(anon): 0 kB' 'Active(file): 310412 kB' 'Inactive(file): 254692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8135452 kB' 'Mapped: 84608 kB' 'AnonPages: 258836 kB' 'Shmem: 7570348 kB' 'KernelStack: 11800 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119488 kB' 'Slab: 529012 kB' 'SReclaimable: 119488 kB' 'SUnreclaim: 409524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 
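The printf that just went by is the node-1 snapshot (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Surp: 0 at its tail). Per-node meminfo lines carry a "Node <N> " prefix, which the helper strips with the mapfile plus extglob expansion visible in the trace before scanning fields. A minimal equivalent, assuming bash with extglob enabled:

    shopt -s extglob                                   # needed for +([0-9]) in the expansion below
    mapfile -t mem < /sys/devices/system/node/node1/meminfo
    mem=("${mem[@]#Node +([0-9]) }")                   # "Node 1 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]}" | head -n 3              # first few cleaned-up lines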
00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.649 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.649 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- 
setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 
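Once this node-1 scan reaches HugePages_Surp it echoes 0 and returns, after which hugepages.sh prints the per-node expectations ("node0=512 expecting 512", "node1=1024 expecting 1024") and asserts the comma-joined layout with the same [[ ... == ... ]] pattern-match idiom seen throughout the trace. A sketch of that final assertion (variable names here are illustrative, not the script's own):

    nodes_test=(512 1024)                         # per-node counts gathered above (node0, node1)
    expected=512,1024
    actual=$(IFS=,; echo "${nodes_test[*]}")      # join with commas -> "512,1024"
    [[ $actual == "$expected" ]] && echo "layout OK: $actual"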
00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # continue 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.650 22:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.650 22:30:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.650 22:30:51 -- setup/common.sh@33 -- # echo 0 00:04:06.650 22:30:51 -- setup/common.sh@33 -- # return 0 00:04:06.650 22:30:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.650 22:30:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.650 22:30:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.650 22:30:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.650 22:30:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.650 node0=512 expecting 512 00:04:06.650 22:30:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.650 22:30:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.650 22:30:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.650 22:30:51 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:06.650 node1=1024 expecting 1024 00:04:06.650 22:30:51 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:06.650 00:04:06.650 real 0m4.431s 00:04:06.650 user 0m1.764s 00:04:06.650 sys 0m2.735s 00:04:06.650 22:30:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.650 22:30:51 -- common/autotest_common.sh@10 -- # set +x 00:04:06.650 ************************************ 00:04:06.650 END TEST custom_alloc 00:04:06.650 ************************************ 00:04:06.650 22:30:51 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:06.650 22:30:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.650 22:30:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.650 22:30:51 -- common/autotest_common.sh@10 -- # set +x 00:04:06.650 ************************************ 00:04:06.650 START TEST no_shrink_alloc 00:04:06.650 ************************************ 00:04:06.650 22:30:51 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:06.650 22:30:51 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:06.650 22:30:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.650 22:30:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.650 22:30:51 -- setup/hugepages.sh@51 -- # shift 00:04:06.650 22:30:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.650 22:30:51 -- setup/hugepages.sh@52 
-- # local node_ids 00:04:06.650 22:30:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.650 22:30:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.650 22:30:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.650 22:30:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.650 22:30:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.650 22:30:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.650 22:30:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.650 22:30:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.650 22:30:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.650 22:30:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:06.650 22:30:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.650 22:30:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:06.650 22:30:51 -- setup/hugepages.sh@73 -- # return 0 00:04:06.650 22:30:51 -- setup/hugepages.sh@198 -- # setup output 00:04:06.650 22:30:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.650 22:30:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.865 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:10.865 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:10.865 22:30:55 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:10.865 22:30:55 -- setup/hugepages.sh@89 -- # local node 00:04:10.865 22:30:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.865 22:30:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.865 22:30:55 -- setup/hugepages.sh@92 -- # local surp 00:04:10.865 22:30:55 -- setup/hugepages.sh@93 -- # local resv 00:04:10.865 22:30:55 -- setup/hugepages.sh@94 -- # local anon 00:04:10.865 22:30:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.865 22:30:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.865 22:30:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.865 22:30:55 -- setup/common.sh@18 -- # local node= 00:04:10.865 22:30:55 -- setup/common.sh@19 -- # local var val 00:04:10.865 22:30:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.865 22:30:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.865 22:30:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.865 22:30:55 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.865 22:30:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.865 22:30:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109536212 kB' 'MemAvailable: 109919484 kB' 'Buffers: 2736 kB' 'Cached: 15402384 kB' 'SwapCached: 0 kB' 'Active: 15647412 kB' 'Inactive: 371152 kB' 'Active(anon): 14964104 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616908 kB' 'Mapped: 198892 kB' 'Shmem: 14350660 kB' 'KReclaimable: 336816 kB' 'Slab: 1199972 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 863156 kB' 'KernelStack: 27632 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16415708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237588 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.865 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.865 
22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
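This pass of the field scan is get_meminfo AnonHugePages for the whole system (no node argument, so mem_f stays /proc/meminfo); its result becomes anon=0, and the helper runs again for HugePages_Surp at hugepages.sh@99, while the resv term in the earlier hugepages.sh@110 check comes from HugePages_Rsvd the same way. A one-pass sketch that gathers the same three numbers (illustration only, not how the script does it):

    # Pull anon / surp / resv from the global /proc/meminfo in a single awk pass.
    read -r anon surp resv < <(awk '
        /^AnonHugePages:/  {a=$2}
        /^HugePages_Surp:/ {s=$2}
        /^HugePages_Rsvd:/ {r=$2}
        END {print a+0, s+0, r+0}' /proc/meminfo)
    echo "anon=$anon surp=$surp resv=$resv"       # all three are 0 in this run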
00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.866 22:30:55 -- setup/common.sh@33 -- # echo 0 00:04:10.866 22:30:55 -- setup/common.sh@33 -- # return 0 00:04:10.866 22:30:55 -- setup/hugepages.sh@97 -- # anon=0 00:04:10.866 22:30:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.866 22:30:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.866 22:30:55 -- setup/common.sh@18 -- # local node= 00:04:10.866 22:30:55 -- setup/common.sh@19 -- # local var val 00:04:10.866 22:30:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.866 22:30:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.866 22:30:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.866 22:30:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.866 22:30:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.866 22:30:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109541504 kB' 'MemAvailable: 109924776 kB' 'Buffers: 2736 kB' 'Cached: 15402388 kB' 'SwapCached: 0 kB' 'Active: 15647192 kB' 'Inactive: 371152 kB' 'Active(anon): 14963884 kB' 'Inactive(anon): 0 kB' 
'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616652 kB' 'Mapped: 198884 kB' 'Shmem: 14350664 kB' 'KReclaimable: 336816 kB' 'Slab: 1199924 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 863108 kB' 'KernelStack: 27632 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16414076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237556 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.866 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.866 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 
22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.867 22:30:55 -- setup/common.sh@33 -- # echo 0 00:04:10.867 22:30:55 -- setup/common.sh@33 -- # return 0 00:04:10.867 22:30:55 -- setup/hugepages.sh@99 -- # surp=0 00:04:10.867 22:30:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.867 22:30:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.867 22:30:55 -- setup/common.sh@18 -- # local node= 00:04:10.867 22:30:55 -- setup/common.sh@19 -- # local var val 00:04:10.867 22:30:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.867 22:30:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.867 22:30:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.867 22:30:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.867 22:30:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.867 22:30:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109542508 kB' 'MemAvailable: 109925780 kB' 'Buffers: 2736 kB' 'Cached: 15402388 kB' 'SwapCached: 0 kB' 'Active: 15647336 kB' 'Inactive: 371152 kB' 'Active(anon): 14964028 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616724 kB' 'Mapped: 198820 kB' 'Shmem: 14350664 kB' 'KReclaimable: 336816 kB' 'Slab: 1199964 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 863148 kB' 'KernelStack: 27648 kB' 'PageTables: 9740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16415736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237604 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 
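The xtrace above is the per-key scan that setup/common.sh performs for get_meminfo: it prints the meminfo text, splits each line on ': ' into a key and a value, and skips every key that does not match the requested one (the backslash-escaped names in the [[ ]] tests are just how bash xtrace renders the comparison pattern). A minimal stand-alone sketch of that pattern follows; the function name get_field is illustrative, not the exact SPDK helper:

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo lookup pattern seen in the trace above.
    # get_field KEY [FILE] prints the value for KEY (e.g. HugePages_Rsvd)
    # from FILE (default /proc/meminfo) and returns 0, or returns 1 if absent.
    get_field() {
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            # Non-matching keys are skipped, exactly like the "continue"
            # branches in the trace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    surp=$(get_field HugePages_Surp)   # 0 in this run
    resv=$(get_field HugePages_Rsvd)   # 0 in this run
    echo "surplus=$surp reserved=$resv"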
00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.868 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 
22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.869 22:30:55 -- setup/common.sh@33 -- # echo 0 00:04:10.869 22:30:55 -- setup/common.sh@33 -- # return 0 00:04:10.869 22:30:55 -- setup/hugepages.sh@100 -- # resv=0 00:04:10.869 22:30:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.869 nr_hugepages=1024 00:04:10.869 22:30:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.869 resv_hugepages=0 00:04:10.869 22:30:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.869 surplus_hugepages=0 00:04:10.869 22:30:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.869 anon_hugepages=0 00:04:10.869 22:30:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.869 22:30:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.869 22:30:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.869 22:30:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.869 22:30:55 -- setup/common.sh@18 -- # local node= 00:04:10.869 22:30:55 -- setup/common.sh@19 -- # local var val 00:04:10.869 22:30:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.869 22:30:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.869 22:30:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.869 22:30:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.869 22:30:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.869 22:30:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109543280 kB' 'MemAvailable: 109926552 kB' 'Buffers: 2736 kB' 'Cached: 15402412 kB' 'SwapCached: 0 kB' 'Active: 15646904 kB' 'Inactive: 371152 kB' 'Active(anon): 14963596 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616248 kB' 'Mapped: 198820 kB' 'Shmem: 14350688 kB' 'KReclaimable: 336816 kB' 'Slab: 1199964 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 863148 kB' 'KernelStack: 27728 kB' 'PageTables: 9812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16414108 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237652 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 22:30:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- 
setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 
00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.870 22:30:55 -- setup/common.sh@33 -- # echo 1024 00:04:10.870 22:30:55 -- setup/common.sh@33 -- # return 0 00:04:10.870 22:30:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.870 22:30:55 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.870 22:30:55 -- setup/hugepages.sh@27 -- # local node 00:04:10.871 22:30:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.871 22:30:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.871 22:30:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.871 22:30:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:10.871 22:30:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.871 22:30:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.871 22:30:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.871 22:30:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.871 22:30:55 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.871 22:30:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.871 22:30:55 -- setup/common.sh@18 -- # local node=0 00:04:10.871 22:30:55 -- setup/common.sh@19 -- # local var val 00:04:10.871 22:30:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.871 22:30:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.871 22:30:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.871 22:30:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.871 22:30:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.871 22:30:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65610712 kB' 'MemFree: 53648320 kB' 'MemUsed: 11962392 kB' 'SwapCached: 0 kB' 'Active: 7506508 kB' 'Inactive: 116460 kB' 'Active(anon): 7133612 kB' 'Inactive(anon): 0 kB' 'Active(file): 372896 kB' 'Inactive(file): 116460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7269712 kB' 'Mapped: 114212 kB' 'AnonPages: 356548 kB' 'Shmem: 6780356 kB' 'KernelStack: 15736 kB' 'PageTables: 5208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217328 kB' 'Slab: 670764 kB' 'SReclaimable: 217328 kB' 'SUnreclaim: 453436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # 
continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 
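At this point the trace has switched mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo (get_meminfo HugePages_Surp 0), where every line carries a "Node 0 " prefix that the script strips with the extglob expansion "${mem[@]#Node +([0-9]) }" before running the same key/value scan. A hedged per-node sketch, with an illustrative helper name:

    # Sketch of the per-node variant: node meminfo lines look like
    # "Node 0 HugePages_Total:  1024", so the "Node <n> " prefix is stripped
    # before the same key/value scan is applied.
    shopt -s extglob
    get_node_field() {
        local get=$1 node=$2
        local mem_f=/sys/devices/system/node/node${node}/meminfo
        [[ -e $mem_f ]] || return 1
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node <n> " prefix
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_node_field HugePages_Surp 0    # prints 0 on the node traced above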
00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 22:30:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.872 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.872 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.872 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.872 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.872 22:30:55 -- setup/common.sh@32 -- # continue 00:04:10.872 22:30:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 22:30:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 22:30:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.872 22:30:55 -- setup/common.sh@33 -- # echo 0 00:04:10.872 22:30:55 -- setup/common.sh@33 -- # return 0 00:04:10.872 22:30:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.872 22:30:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.872 22:30:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.872 22:30:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.872 22:30:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.872 node0=1024 expecting 1024 00:04:10.872 22:30:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.872 22:30:55 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:10.872 22:30:55 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:10.872 22:30:55 -- setup/hugepages.sh@202 -- # setup output 00:04:10.872 22:30:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.872 22:30:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.088 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:15.088 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:15.088 0000:00:01.1 (8086 0b00): Already using 
the vfio-pci driver 00:04:15.088 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:15.088 22:30:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:15.088 22:30:59 -- setup/hugepages.sh@89 -- # local node 00:04:15.088 22:30:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.088 22:30:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.088 22:30:59 -- setup/hugepages.sh@92 -- # local surp 00:04:15.088 22:30:59 -- setup/hugepages.sh@93 -- # local resv 00:04:15.088 22:30:59 -- setup/hugepages.sh@94 -- # local anon 00:04:15.088 22:30:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.088 22:30:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.088 22:30:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.088 22:30:59 -- setup/common.sh@18 -- # local node= 00:04:15.088 22:30:59 -- setup/common.sh@19 -- # local var val 00:04:15.088 22:30:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.088 22:30:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.088 22:30:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.088 22:30:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.088 22:30:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.088 22:30:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.088 22:30:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109526152 kB' 'MemAvailable: 109909424 kB' 'Buffers: 2736 kB' 'Cached: 15402536 kB' 'SwapCached: 0 kB' 'Active: 15647952 kB' 'Inactive: 371152 kB' 'Active(anon): 14964644 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616732 kB' 'Mapped: 198948 kB' 'Shmem: 14350812 kB' 'KReclaimable: 336816 kB' 'Slab: 1200016 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 863200 kB' 'KernelStack: 27504 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16411828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237588 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # continue 
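The INFO line records setup.sh leaving the existing 1024 hugepages in place even though only 512 were requested (CLEAR_HUGE=no, NRHUGE=512), and verify_nr_hugepages then re-checks the books: the system-wide HugePages_Total must equal the expected count plus surplus and reserved pages, and the per-node totals are summed and compared ("node0=1024 expecting 1024" earlier in the trace). A hedged, self-contained sketch of that arithmetic; helper names and the awk shortcuts are illustrative, not the SPDK code:

    # Sketch of the hugepage accounting check the trace walks through.
    nr_hugepages=1024                                      # expected count
    read_key() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }

    total=$(read_key HugePages_Total)   # 1024 in this run
    surp=$(read_key HugePages_Surp)     # 0
    resv=$(read_key HugePages_Rsvd)     # 0

    # System-wide consistency: allocated == expected + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) && echo "system-wide count OK"

    # Per-node distribution: in this run node0 holds all 1024 pages, node1 none.
    for f in /sys/devices/system/node/node[0-9]*/meminfo; do
        n=${f%/meminfo}; n=${n##*node}
        t=$(awk '/HugePages_Total:/ {print $NF}' "$f")
        echo "node${n}=${t}"            # trace reports "node0=1024 expecting 1024"
    done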
00:04:15.088 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.088 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.088 22:30:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 
22:30:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.089 22:30:59 -- setup/common.sh@33 -- # echo 0 00:04:15.089 22:30:59 -- setup/common.sh@33 -- # 
return 0 00:04:15.089 22:30:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:15.089 22:30:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.089 22:30:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.089 22:30:59 -- setup/common.sh@18 -- # local node= 00:04:15.089 22:30:59 -- setup/common.sh@19 -- # local var val 00:04:15.089 22:30:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.089 22:30:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.089 22:30:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.089 22:30:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.089 22:30:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.089 22:30:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 22:30:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109526772 kB' 'MemAvailable: 109910044 kB' 'Buffers: 2736 kB' 'Cached: 15402540 kB' 'SwapCached: 0 kB' 'Active: 15647068 kB' 'Inactive: 371152 kB' 'Active(anon): 14963760 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616284 kB' 'Mapped: 198836 kB' 'Shmem: 14350816 kB' 'KReclaimable: 336816 kB' 'Slab: 1199992 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 863176 kB' 'KernelStack: 27472 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16411840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237556 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:15.089 22:30:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.090 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 
-- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.091 22:30:59 -- setup/common.sh@33 -- # echo 0 00:04:15.091 22:30:59 -- setup/common.sh@33 -- # return 0 00:04:15.091 22:30:59 -- setup/hugepages.sh@99 -- # surp=0 00:04:15.091 22:30:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.091 22:30:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.091 22:30:59 -- setup/common.sh@18 -- # local node= 00:04:15.091 22:30:59 -- setup/common.sh@19 -- # local var val 00:04:15.091 22:30:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.091 22:30:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.091 22:30:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.091 22:30:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.091 22:30:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.091 22:30:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109526304 kB' 'MemAvailable: 109909576 kB' 'Buffers: 2736 kB' 'Cached: 15402552 kB' 'SwapCached: 0 kB' 'Active: 15647080 kB' 'Inactive: 371152 kB' 'Active(anon): 14963772 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616276 kB' 'Mapped: 198836 kB' 'Shmem: 14350828 kB' 'KReclaimable: 336816 kB' 'Slab: 1199992 kB' 'SReclaimable: 336816 kB' 'SUnreclaim: 863176 kB' 'KernelStack: 27472 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16411856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237556 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 
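Three meminfo snapshots have been dumped so far (for the AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups) and all of them report the same hugepage state, which is internally consistent: HugePages_Total 1024 x Hugepagesize 2048 kB = 2097152 kB, exactly the Hugetlb figure, i.e. 2 GiB of hugepage memory with every page still free. The same consistency check as a one-liner against a live /proc/meminfo (sketch):
awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} /^Hugetlb:/ {h=$2}
     END { if (t * s == h) print "hugetlb accounting consistent"; else print "mismatch: " t*s " vs " h }' /proc/meminfo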
00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.091 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.091 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 
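Two of the three correction terms are already known to be zero at this point (anon from the first lookup, surp from the second); the reserved-page lookup running here returns 0 as well, and the trace just below echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 and performs the actual verification: the HugePages_Total reported by the kernel must equal nr_hugepages plus the surplus and reserved counts. With this run's numbers that is 1024 == 1024 + 0 + 0, so the check passes. Reduced to a sketch (values hard-coded from the snapshots above):
nr_hugepages=1024   # 512 were requested, but 1024 were already allocated, so 1024 is what gets verified
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=1024          # HugePages_Total from the snapshots above
(( total == nr_hugepages + surp + resv )) \
  && echo "hugepage count verified" \
  || echo "unexpected hugepage count: $total" >&2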
00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.092 22:30:59 -- setup/common.sh@33 -- # echo 0 00:04:15.092 22:30:59 -- setup/common.sh@33 -- # return 0 00:04:15.092 22:30:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:15.092 22:30:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.092 nr_hugepages=1024 00:04:15.092 22:30:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.092 resv_hugepages=0 00:04:15.092 22:30:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.092 surplus_hugepages=0 00:04:15.092 22:30:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.092 anon_hugepages=0 00:04:15.092 22:30:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.092 22:30:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.092 22:30:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.092 22:30:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.092 22:30:59 -- setup/common.sh@18 -- # local node= 00:04:15.092 22:30:59 -- setup/common.sh@19 -- # local var val 00:04:15.092 22:30:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.092 22:30:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.092 22:30:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.092 22:30:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.092 22:30:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.092 22:30:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131498576 kB' 'MemFree: 109526516 kB' 'MemAvailable: 109909788 kB' 'Buffers: 2736 kB' 'Cached: 15402552 kB' 'SwapCached: 0 kB' 'Active: 15647080 kB' 'Inactive: 371152 kB' 'Active(anon): 14963772 kB' 'Inactive(anon): 0 kB' 'Active(file): 683308 kB' 'Inactive(file): 371152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616276 kB' 'Mapped: 198836 kB' 'Shmem: 14350828 kB' 'KReclaimable: 336816 kB' 'Slab: 1199992 kB' 'SReclaimable: 336816 kB' 
'SUnreclaim: 863176 kB' 'KernelStack: 27472 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 73089316 kB' 'Committed_AS: 16411872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237556 kB' 'VmallocChunk: 0 kB' 'Percpu: 134208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4373876 kB' 'DirectMap2M: 62414848 kB' 'DirectMap1G: 69206016 kB' 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.092 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.093 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.094 22:30:59 -- setup/common.sh@33 -- # echo 1024 00:04:15.094 22:30:59 -- setup/common.sh@33 -- # return 0 00:04:15.094 22:30:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.094 22:30:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.094 22:30:59 -- setup/hugepages.sh@27 -- # local node 00:04:15.094 22:30:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.094 22:30:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.094 22:30:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.094 22:30:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:15.094 22:30:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.094 22:30:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.094 22:30:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.094 22:30:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.094 22:30:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.094 22:30:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.094 22:30:59 -- setup/common.sh@18 -- # local node=0 00:04:15.094 22:30:59 -- setup/common.sh@19 -- # local var val 00:04:15.094 22:30:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.094 22:30:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.094 22:30:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.094 22:30:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.094 22:30:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.094 22:30:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.094 22:30:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65610712 kB' 'MemFree: 53645032 kB' 'MemUsed: 11965680 kB' 'SwapCached: 0 kB' 'Active: 7506568 kB' 'Inactive: 116460 kB' 'Active(anon): 7133672 kB' 'Inactive(anon): 0 kB' 'Active(file): 372896 kB' 'Inactive(file): 116460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7269808 kB' 'Mapped: 114224 kB' 'AnonPages: 356412 kB' 'Shmem: 6780452 kB' 'KernelStack: 15640 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217328 kB' 'Slab: 670620 kB' 'SReclaimable: 217328 kB' 'SUnreclaim: 453292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.094 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.094 22:30:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # 
continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # continue 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.095 22:30:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.095 22:30:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.095 22:30:59 -- setup/common.sh@33 -- # echo 0 00:04:15.095 22:30:59 -- setup/common.sh@33 -- # return 0 00:04:15.095 22:30:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.095 22:30:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.095 22:30:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.095 22:30:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.095 22:30:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:15.095 node0=1024 expecting 1024 00:04:15.095 22:30:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.095 00:04:15.095 real 0m8.484s 00:04:15.095 user 0m3.289s 00:04:15.095 sys 0m5.287s 00:04:15.095 22:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.095 22:30:59 -- common/autotest_common.sh@10 -- # set +x 00:04:15.095 ************************************ 00:04:15.095 END TEST no_shrink_alloc 00:04:15.095 ************************************ 00:04:15.095 22:30:59 -- 
setup/hugepages.sh@217 -- # clear_hp 00:04:15.095 22:30:59 -- setup/hugepages.sh@37 -- # local node hp 00:04:15.095 22:30:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.095 22:30:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.095 22:30:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.095 22:30:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.357 22:30:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.357 22:30:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.357 22:30:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.357 22:30:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.357 22:30:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.357 22:30:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.357 22:30:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:15.357 22:30:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:15.357 00:04:15.357 real 0m30.490s 00:04:15.357 user 0m11.809s 00:04:15.357 sys 0m19.000s 00:04:15.357 22:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.357 22:30:59 -- common/autotest_common.sh@10 -- # set +x 00:04:15.357 ************************************ 00:04:15.357 END TEST hugepages 00:04:15.357 ************************************ 00:04:15.357 22:30:59 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:15.357 22:30:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.357 22:30:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.357 22:30:59 -- common/autotest_common.sh@10 -- # set +x 00:04:15.357 ************************************ 00:04:15.357 START TEST driver 00:04:15.357 ************************************ 00:04:15.357 22:30:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:15.357 * Looking for test storage... 
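Before the driver test output continues: the hugepages checks above walk /proc/meminfo (or the per-NUMA-node copy under /sys) field by field with read -r until the requested key is found. A condensed, standalone bash sketch of that lookup; the paths and the helper name are taken from the trace, while the sed/awk form is shorthand for the traced read loop rather than the script's literal code:

get_meminfo() {                       # e.g. get_meminfo HugePages_Surp 0
    local key=$1 node=$2
    local mem_f=/proc/meminfo
    # prefer the per-node view when a node number is given and the file exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # per-node files prefix every line with "Node <n> "; drop it, then print the value
    sed "s/^Node $node //" "$mem_f" | awk -v k="$key:" '$1 == k {print $2}'
}

get_meminfo HugePages_Total 0         # prints 1024 on node0 in this run
get_meminfo HugePages_Surp 0          # prints 0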
00:04:15.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:15.357 22:31:00 -- setup/driver.sh@68 -- # setup reset 00:04:15.357 22:31:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.357 22:31:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.651 22:31:05 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.651 22:31:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.651 22:31:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.651 22:31:05 -- common/autotest_common.sh@10 -- # set +x 00:04:20.651 ************************************ 00:04:20.651 START TEST guess_driver 00:04:20.651 ************************************ 00:04:20.651 22:31:05 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:20.651 22:31:05 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.651 22:31:05 -- setup/driver.sh@47 -- # local fail=0 00:04:20.651 22:31:05 -- setup/driver.sh@49 -- # pick_driver 00:04:20.651 22:31:05 -- setup/driver.sh@36 -- # vfio 00:04:20.651 22:31:05 -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.651 22:31:05 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.651 22:31:05 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.651 22:31:05 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:20.651 22:31:05 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:20.651 22:31:05 -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:20.651 22:31:05 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:20.651 22:31:05 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:20.651 22:31:05 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:20.651 22:31:05 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:20.651 22:31:05 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:20.651 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.651 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.651 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.651 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.651 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:20.651 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:20.651 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:20.651 22:31:05 -- setup/driver.sh@30 -- # return 0 00:04:20.651 22:31:05 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:20.651 22:31:05 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:20.651 22:31:05 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:20.651 22:31:05 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:20.651 Looking for driver=vfio-pci 00:04:20.651 22:31:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.651 22:31:05 -- setup/driver.sh@45 -- # setup output config 00:04:20.651 22:31:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.651 22:31:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.953 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.953 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:23.953 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.213 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.213 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.213 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.214 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.214 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.214 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.214 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.214 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.214 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.214 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.214 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.214 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.214 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.214 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.214 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.214 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.214 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.214 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.214 22:31:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.214 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.214 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.214 22:31:08 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:24.214 22:31:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.214 22:31:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.792 22:31:09 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:24.792 22:31:09 -- setup/driver.sh@65 -- # setup reset 00:04:24.792 22:31:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.792 22:31:09 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.082 00:04:30.082 real 0m9.553s 00:04:30.082 user 0m3.157s 00:04:30.082 sys 0m5.557s 00:04:30.082 22:31:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.082 22:31:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.082 ************************************ 00:04:30.082 END TEST guess_driver 00:04:30.082 ************************************ 00:04:30.082 00:04:30.082 real 0m14.789s 00:04:30.082 user 0m4.720s 00:04:30.082 sys 0m8.422s 00:04:30.082 22:31:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.082 22:31:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.082 ************************************ 00:04:30.082 END TEST driver 00:04:30.082 ************************************ 00:04:30.082 22:31:14 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:30.082 22:31:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.082 22:31:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.082 22:31:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.082 ************************************ 00:04:30.082 START TEST devices 00:04:30.082 ************************************ 00:04:30.082 22:31:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:30.082 * Looking for test storage... 
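The guess_driver run above settles on vfio-pci after checking the IOMMU state and confirming that the module resolves to real kernel objects. A minimal standalone sketch of those checks, with the sysfs paths and commands as they appear in the trace; the unsafe-noiommu value is read here but, as in this run, not consulted because IOMMU groups are present:

unsafe_vfio=N
if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
    unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
fi
iommu_groups=(/sys/kernel/iommu_groups/*)
if (( ${#iommu_groups[@]} > 0 )); then               # 370 groups on this host
    # vfio_pci counts as usable when modprobe resolves it and its
    # dependencies to actual .ko modules
    if [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]; then
        driver=vfio-pci
        echo "Looking for driver=$driver"
    fi
fi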
00:04:30.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:30.082 22:31:14 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:30.082 22:31:14 -- setup/devices.sh@192 -- # setup reset 00:04:30.082 22:31:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.082 22:31:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.404 22:31:19 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:35.404 22:31:19 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:35.404 22:31:19 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:35.404 22:31:19 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:35.404 22:31:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.404 22:31:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:35.404 22:31:19 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:35.404 22:31:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.404 22:31:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.404 22:31:19 -- setup/devices.sh@196 -- # blocks=() 00:04:35.404 22:31:19 -- setup/devices.sh@196 -- # declare -a blocks 00:04:35.404 22:31:19 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:35.404 22:31:19 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:35.404 22:31:19 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:35.404 22:31:19 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.404 22:31:19 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:35.404 22:31:19 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.404 22:31:19 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:35.404 22:31:19 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:35.404 22:31:19 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:35.404 22:31:19 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:35.404 22:31:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:35.404 No valid GPT data, bailing 00:04:35.404 22:31:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.404 22:31:19 -- scripts/common.sh@393 -- # pt= 00:04:35.404 22:31:19 -- scripts/common.sh@394 -- # return 1 00:04:35.404 22:31:19 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:35.404 22:31:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:35.404 22:31:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:35.404 22:31:19 -- setup/common.sh@80 -- # echo 1920383410176 00:04:35.404 22:31:19 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:35.404 22:31:19 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.404 22:31:19 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:35.404 22:31:19 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:35.404 22:31:19 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:35.404 22:31:19 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:35.404 22:31:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.404 22:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.404 22:31:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.404 ************************************ 00:04:35.404 START TEST nvme_mount 00:04:35.404 ************************************ 00:04:35.404 22:31:19 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:35.405 22:31:19 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:35.405 22:31:19 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:35.405 22:31:19 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.405 22:31:19 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.405 22:31:19 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:35.405 22:31:19 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.405 22:31:19 -- setup/common.sh@40 -- # local part_no=1 00:04:35.405 22:31:19 -- setup/common.sh@41 -- # local size=1073741824 00:04:35.405 22:31:19 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.405 22:31:19 -- setup/common.sh@44 -- # parts=() 00:04:35.405 22:31:19 -- setup/common.sh@44 -- # local parts 00:04:35.405 22:31:19 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.405 22:31:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.405 22:31:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.405 22:31:19 -- setup/common.sh@46 -- # (( part++ )) 00:04:35.405 22:31:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.405 22:31:19 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:35.405 22:31:19 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.405 22:31:19 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:35.665 Creating new GPT entries in memory. 00:04:35.665 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:35.665 other utilities. 00:04:35.665 22:31:20 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:35.665 22:31:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.665 22:31:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.665 22:31:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.665 22:31:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:36.607 Creating new GPT entries in memory. 00:04:36.607 The operation has completed successfully. 
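The "GPT data structures destroyed!" and "operation has completed successfully" messages above come from the partition step of the nvme_mount test. Reduced to a standalone sketch, this step and the format/mount that follow it in the trace look like this (device, sector range, mkfs flags and mount point all as traced; error handling omitted):

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                             # wipe any existing GPT/MBR
flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # p1: 2097152 sectors = 1 GiB
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"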
00:04:36.607 22:31:21 -- setup/common.sh@57 -- # (( part++ )) 00:04:36.607 22:31:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.607 22:31:21 -- setup/common.sh@62 -- # wait 871073 00:04:36.607 22:31:21 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.607 22:31:21 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:36.607 22:31:21 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.607 22:31:21 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:36.607 22:31:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:36.867 22:31:21 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.867 22:31:21 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.867 22:31:21 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:36.867 22:31:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:36.867 22:31:21 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.867 22:31:21 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.867 22:31:21 -- setup/devices.sh@53 -- # local found=0 00:04:36.867 22:31:21 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.867 22:31:21 -- setup/devices.sh@56 -- # : 00:04:36.867 22:31:21 -- setup/devices.sh@59 -- # local pci status 00:04:36.867 22:31:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.867 22:31:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:36.867 22:31:21 -- setup/devices.sh@47 -- # setup output config 00:04:36.867 22:31:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.867 22:31:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:40.168 22:31:24 -- setup/devices.sh@63 -- # found=1 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 
22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.168 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.168 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.169 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.169 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.169 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.169 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.169 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.169 22:31:24 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.169 22:31:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.429 22:31:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.429 22:31:25 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:40.429 22:31:25 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.429 22:31:25 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.429 22:31:25 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.429 22:31:25 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:40.429 22:31:25 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.429 22:31:25 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.429 22:31:25 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.429 22:31:25 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:40.429 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.429 22:31:25 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.429 22:31:25 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.689 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:40.689 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:40.689 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:40.689 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:40.689 22:31:25 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:40.689 22:31:25 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:40.689 22:31:25 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.689 22:31:25 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:40.689 22:31:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:40.950 22:31:25 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.950 22:31:25 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.950 22:31:25 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:40.950 22:31:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:40.950 22:31:25 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.951 22:31:25 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.951 22:31:25 -- setup/devices.sh@53 -- # local found=0 00:04:40.951 22:31:25 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.951 22:31:25 -- setup/devices.sh@56 -- # : 00:04:40.951 22:31:25 -- setup/devices.sh@59 -- # local pci status 00:04:40.951 22:31:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.951 22:31:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:40.951 22:31:25 -- setup/devices.sh@47 -- # setup output config 00:04:40.951 22:31:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.951 22:31:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:45.154 22:31:29 -- setup/devices.sh@63 -- # found=1 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.154 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.154 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.155 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.155 22:31:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.155 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.155 22:31:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.155 22:31:29 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:45.155 22:31:29 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.155 22:31:29 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.155 22:31:29 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.155 22:31:29 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.155 22:31:29 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:45.155 22:31:29 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:45.155 22:31:29 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:45.155 22:31:29 -- setup/devices.sh@50 -- # local mount_point= 00:04:45.155 22:31:29 -- setup/devices.sh@51 -- # local test_file= 00:04:45.155 22:31:29 -- setup/devices.sh@53 -- # local found=0 00:04:45.155 22:31:29 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:45.155 22:31:29 -- setup/devices.sh@59 -- # local pci status 00:04:45.155 22:31:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.155 22:31:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:45.155 22:31:29 -- setup/devices.sh@47 -- # setup output config 00:04:45.155 22:31:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.155 22:31:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.362 22:31:33 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:49.362 22:31:33 -- setup/devices.sh@63 -- # found=1 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.362 22:31:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.362 22:31:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:49.362 22:31:33 -- setup/devices.sh@68 -- # return 0 00:04:49.362 22:31:33 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:49.362 22:31:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.362 22:31:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:49.362 22:31:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.362 22:31:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.362 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.362 00:04:49.362 real 0m14.427s 00:04:49.362 user 0m4.509s 00:04:49.362 sys 0m7.779s 00:04:49.362 22:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.362 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:49.362 ************************************ 00:04:49.362 END TEST nvme_mount 00:04:49.362 ************************************ 00:04:49.362 22:31:33 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:49.362 22:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.362 22:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.362 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:49.362 ************************************ 00:04:49.362 START TEST dm_mount 00:04:49.362 ************************************ 00:04:49.362 22:31:33 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:49.362 22:31:33 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:49.362 22:31:33 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:49.362 22:31:33 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:49.362 22:31:33 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:49.362 22:31:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:49.362 22:31:33 -- setup/common.sh@40 -- # local part_no=2 00:04:49.362 22:31:33 -- setup/common.sh@41 -- # local size=1073741824 00:04:49.362 22:31:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.362 22:31:33 -- setup/common.sh@44 -- # parts=() 00:04:49.362 22:31:33 -- setup/common.sh@44 -- # local parts 00:04:49.362 22:31:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.362 22:31:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.362 22:31:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.362 22:31:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:49.362 22:31:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.362 22:31:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.362 22:31:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:49.362 22:31:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.362 22:31:33 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:49.362 22:31:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:49.362 22:31:33 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:50.305 Creating new GPT entries in memory. 00:04:50.305 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.305 other utilities. 00:04:50.305 22:31:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.305 22:31:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.305 22:31:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.305 22:31:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.305 22:31:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:51.270 Creating new GPT entries in memory. 00:04:51.270 The operation has completed successfully. 
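At this point the dm_mount test has wiped the disk and created the first of its two 1 GiB partitions; the second (sectors 2099200-4196351) and a device-mapper node named nvme_dm_test follow in the trace below. The trace does not show the mapping table handed to dmsetup, so the linear table in this sketch is illustrative only; the partition names, device name and sizes are taken from the log:

# two 1 GiB partitions, as created by the test
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351

# hypothetical linear concatenation of both partitions (table values are assumed,
# not copied from the test); dmsetup reads the table from stdin
dmsetup create nvme_dm_test << 'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF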
00:04:51.270 22:31:35 -- setup/common.sh@57 -- # (( part++ )) 00:04:51.270 22:31:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.270 22:31:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:51.270 22:31:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.270 22:31:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:52.213 The operation has completed successfully. 00:04:52.213 22:31:36 -- setup/common.sh@57 -- # (( part++ )) 00:04:52.213 22:31:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.213 22:31:36 -- setup/common.sh@62 -- # wait 876958 00:04:52.213 22:31:36 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:52.213 22:31:36 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.213 22:31:36 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:52.213 22:31:36 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:52.213 22:31:36 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:52.213 22:31:36 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:52.213 22:31:36 -- setup/devices.sh@161 -- # break 00:04:52.213 22:31:36 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:52.213 22:31:36 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:52.213 22:31:36 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:52.213 22:31:36 -- setup/devices.sh@166 -- # dm=dm-1 00:04:52.213 22:31:36 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:52.213 22:31:36 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:52.213 22:31:36 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.213 22:31:36 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:52.213 22:31:36 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.213 22:31:36 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:52.213 22:31:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:52.213 22:31:36 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.213 22:31:36 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:52.213 22:31:36 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:52.213 22:31:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:52.213 22:31:36 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.213 22:31:36 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:52.213 22:31:36 -- setup/devices.sh@53 -- # local found=0 00:04:52.213 22:31:36 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:52.213 22:31:36 -- setup/devices.sh@56 -- # : 00:04:52.213 22:31:36 -- 
setup/devices.sh@59 -- # local pci status 00:04:52.213 22:31:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.213 22:31:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:52.213 22:31:36 -- setup/devices.sh@47 -- # setup output config 00:04:52.213 22:31:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.213 22:31:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:56.421 22:31:40 -- setup/devices.sh@63 -- # found=1 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.421 22:31:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.421 22:31:41 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:56.421 22:31:41 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:56.421 22:31:41 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:56.421 22:31:41 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:56.421 22:31:41 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:56.421 22:31:41 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:56.421 22:31:41 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:56.421 22:31:41 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:56.421 22:31:41 -- setup/devices.sh@50 -- # local mount_point= 00:04:56.421 22:31:41 -- setup/devices.sh@51 -- # local test_file= 00:04:56.421 22:31:41 -- setup/devices.sh@53 -- # local found=0 00:04:56.421 22:31:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.421 22:31:41 -- setup/devices.sh@59 -- # local pci status 00:04:56.421 22:31:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.421 22:31:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:56.421 22:31:41 -- setup/devices.sh@47 -- # setup output config 00:04:56.421 22:31:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.421 22:31:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.627 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:05:00.628 22:31:44 -- setup/devices.sh@63 -- # found=1 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 
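The verification above, and its continuation below, confirms that nvme_dm_test really sits on top of both NVMe partitions before the final cleanup_dm pass removes it. A standalone sketch of that holders check, with the sysfs paths and names as they appear in the trace:

dm=$(readlink -f /dev/mapper/nvme_dm_test)   # /dev/dm-1 in this run
dm=${dm##*/}                                 # dm-1
for part in nvme0n1p1 nvme0n1p2; do
    # each partition must list the dm node among its holders
    [[ -e /sys/class/block/$part/holders/$dm ]] || echo "missing holder on $part"
done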
00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:00.628 22:31:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.628 22:31:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.628 22:31:45 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.628 22:31:45 -- setup/devices.sh@68 -- # return 0 00:05:00.628 22:31:45 -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.628 22:31:45 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.628 22:31:45 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.628 22:31:45 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:00.628 22:31:45 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.628 22:31:45 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:00.628 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.628 22:31:45 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.628 22:31:45 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:00.628 00:05:00.628 real 0m11.289s 00:05:00.628 user 0m3.161s 00:05:00.628 sys 0m5.198s 00:05:00.628 22:31:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.628 22:31:45 -- common/autotest_common.sh@10 -- # set +x 00:05:00.628 ************************************ 00:05:00.628 END TEST dm_mount 00:05:00.628 ************************************ 00:05:00.628 22:31:45 -- setup/devices.sh@1 -- # cleanup 00:05:00.628 22:31:45 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:00.628 22:31:45 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.628 22:31:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.628 22:31:45 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.628 22:31:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.628 22:31:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.628 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:00.628 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:00.628 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:00.628 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:00.628 22:31:45 -- setup/devices.sh@12 -- # cleanup_dm 00:05:00.628 22:31:45 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.628 22:31:45 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.628 22:31:45 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.628 22:31:45 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.628 22:31:45 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.628 22:31:45 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:00.628 00:05:00.628 real 0m30.637s 00:05:00.628 user 0m9.423s 00:05:00.628 sys 0m16.021s 00:05:00.628 22:31:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.628 22:31:45 -- common/autotest_common.sh@10 -- # set +x 00:05:00.628 ************************************ 00:05:00.628 END TEST devices 00:05:00.628 ************************************ 00:05:00.889 00:05:00.889 real 1m44.040s 00:05:00.889 user 0m35.250s 00:05:00.889 sys 1m0.015s 00:05:00.889 22:31:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.889 22:31:45 -- common/autotest_common.sh@10 -- # set +x 00:05:00.889 ************************************ 00:05:00.889 END TEST setup.sh 00:05:00.889 ************************************ 00:05:00.889 22:31:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:05.133 Hugepages 00:05:05.133 node hugesize free / total 00:05:05.133 node0 1048576kB 0 / 0 00:05:05.133 node0 2048kB 2048 / 2048 00:05:05.133 node1 1048576kB 0 / 0 00:05:05.133 node1 2048kB 0 / 0 00:05:05.133 00:05:05.133 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.133 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:05.133 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:05.133 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:05.133 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:05.133 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:05.133 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:05.133 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:05.133 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:05.133 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:05.133 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:05.133 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:05.133 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:05.133 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:05.133 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:05.133 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:05.133 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:05.133 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:05.133 22:31:49 -- spdk/autotest.sh@141 -- # uname -s 00:05:05.133 22:31:49 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:05.133 22:31:49 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:05.133 22:31:49 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.437 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:08.437 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:08.437 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:08.437 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:08.437 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:08.437 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:05:08.437 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:08.437 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:08.437 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:08.437 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:08.698 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:08.698 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:08.698 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:08.698 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:08.698 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:08.698 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:10.612 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:10.612 22:31:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:12.000 22:31:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:12.000 22:31:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:12.000 22:31:56 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.000 22:31:56 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:12.000 22:31:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.000 22:31:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.000 22:31:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.000 22:31:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.000 22:31:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.000 22:31:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:12.000 22:31:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:12.000 22:31:56 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.216 Waiting for block devices as requested 00:05:16.216 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:16.216 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:16.216 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:16.216 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:16.216 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:16.216 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:16.216 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:16.216 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:16.216 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:16.477 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:16.477 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:16.477 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:16.477 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:16.738 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:16.738 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:16.738 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:16.738 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:16.998 22:32:01 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:16.998 22:32:01 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:16.998 22:32:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:16.998 22:32:01 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:16.999 22:32:01 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:16.999 22:32:01 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:17.260 22:32:01 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:17.260 22:32:01 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:17.260 22:32:01 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:17.260 22:32:01 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:17.260 22:32:01 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:17.260 22:32:01 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:17.260 22:32:01 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:17.260 22:32:01 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:05:17.260 22:32:01 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:17.260 22:32:01 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:17.260 22:32:01 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:17.260 22:32:01 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:17.260 22:32:01 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:17.260 22:32:01 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:17.260 22:32:01 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:17.260 22:32:01 -- common/autotest_common.sh@1542 -- # continue 00:05:17.260 22:32:01 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:17.260 22:32:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:17.260 22:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 22:32:01 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:17.260 22:32:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:17.260 22:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 22:32:01 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.474 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:21.474 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:21.474 22:32:06 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:21.474 22:32:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:21.474 22:32:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.474 22:32:06 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:21.474 22:32:06 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:21.474 22:32:06 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.474 22:32:06 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:21.474 22:32:06 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:21.474 22:32:06 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:21.474 22:32:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:21.474 
22:32:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:21.474 22:32:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.474 22:32:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:21.474 22:32:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:21.736 22:32:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:21.736 22:32:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:21.736 22:32:06 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:21.736 22:32:06 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:21.736 22:32:06 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:05:21.736 22:32:06 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:21.736 22:32:06 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:21.736 22:32:06 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:21.736 22:32:06 -- common/autotest_common.sh@1578 -- # return 0 00:05:21.736 22:32:06 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:21.736 22:32:06 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:21.736 22:32:06 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:21.736 22:32:06 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:21.736 22:32:06 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:21.736 22:32:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:21.736 22:32:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.736 22:32:06 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:21.736 22:32:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.736 22:32:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.736 22:32:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.736 ************************************ 00:05:21.736 START TEST env 00:05:21.736 ************************************ 00:05:21.736 22:32:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:21.736 * Looking for test storage... 
00:05:21.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:21.736 22:32:06 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:21.736 22:32:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.736 22:32:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.736 22:32:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.736 ************************************ 00:05:21.736 START TEST env_memory 00:05:21.736 ************************************ 00:05:21.736 22:32:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:21.736 00:05:21.736 00:05:21.736 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.736 http://cunit.sourceforge.net/ 00:05:21.736 00:05:21.736 00:05:21.736 Suite: memory 00:05:21.736 Test: alloc and free memory map ...[2024-04-15 22:32:06.538785] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:21.999 passed 00:05:21.999 Test: mem map translation ...[2024-04-15 22:32:06.564490] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:21.999 [2024-04-15 22:32:06.564523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:21.999 [2024-04-15 22:32:06.564578] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:21.999 [2024-04-15 22:32:06.564586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:21.999 passed 00:05:21.999 Test: mem map registration ...[2024-04-15 22:32:06.619798] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:21.999 [2024-04-15 22:32:06.619822] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:21.999 passed 00:05:21.999 Test: mem map adjacent registrations ...passed 00:05:21.999 00:05:21.999 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.999 suites 1 1 n/a 0 0 00:05:21.999 tests 4 4 4 0 0 00:05:21.999 asserts 152 152 152 0 n/a 00:05:21.999 00:05:21.999 Elapsed time = 0.194 seconds 00:05:21.999 00:05:21.999 real 0m0.206s 00:05:21.999 user 0m0.197s 00:05:21.999 sys 0m0.009s 00:05:21.999 22:32:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.999 22:32:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.999 ************************************ 00:05:21.999 END TEST env_memory 00:05:21.999 ************************************ 00:05:21.999 22:32:06 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:21.999 22:32:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.999 22:32:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.999 22:32:06 -- common/autotest_common.sh@10 -- # set +x 
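A minimal sketch for re-running the env suite by hand, assuming a built SPDK tree at the workspace path shown above and root access for hugepage/VFIO setup; the binaries and flags are taken from the trace itself, everything else (paths, privileges) is an assumption about the local environment:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # assumed checkout path; any built SPDK tree works
sudo "$SPDK_DIR/scripts/setup.sh"                             # reserve hugepages and rebind NVMe/ioat devices, as in the Prepare stage
"$SPDK_DIR/test/env/memory/memory_ut"                         # mem map alloc/translate/register unit test (env_memory)
"$SPDK_DIR/test/env/vtophys/vtophys"                          # virtual-to-physical address translation test (env_vtophys)
sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000   # DPDK init + PCI probe (env_dpdk_post_init)
"$SPDK_DIR/test/env/mem_callbacks/mem_callbacks"              # spdk mem event callback test (env_mem_callbacks)
# Running "$SPDK_DIR/test/env/env.sh" drives the same sequence the way autotest does.
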
00:05:21.999 ************************************ 00:05:21.999 START TEST env_vtophys 00:05:21.999 ************************************ 00:05:21.999 22:32:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:21.999 EAL: lib.eal log level changed from notice to debug 00:05:21.999 EAL: Detected lcore 0 as core 0 on socket 0 00:05:21.999 EAL: Detected lcore 1 as core 1 on socket 0 00:05:21.999 EAL: Detected lcore 2 as core 2 on socket 0 00:05:21.999 EAL: Detected lcore 3 as core 3 on socket 0 00:05:21.999 EAL: Detected lcore 4 as core 4 on socket 0 00:05:21.999 EAL: Detected lcore 5 as core 5 on socket 0 00:05:21.999 EAL: Detected lcore 6 as core 6 on socket 0 00:05:21.999 EAL: Detected lcore 7 as core 7 on socket 0 00:05:21.999 EAL: Detected lcore 8 as core 8 on socket 0 00:05:21.999 EAL: Detected lcore 9 as core 9 on socket 0 00:05:21.999 EAL: Detected lcore 10 as core 10 on socket 0 00:05:21.999 EAL: Detected lcore 11 as core 11 on socket 0 00:05:21.999 EAL: Detected lcore 12 as core 12 on socket 0 00:05:21.999 EAL: Detected lcore 13 as core 13 on socket 0 00:05:21.999 EAL: Detected lcore 14 as core 14 on socket 0 00:05:21.999 EAL: Detected lcore 15 as core 15 on socket 0 00:05:21.999 EAL: Detected lcore 16 as core 16 on socket 0 00:05:21.999 EAL: Detected lcore 17 as core 17 on socket 0 00:05:21.999 EAL: Detected lcore 18 as core 18 on socket 0 00:05:21.999 EAL: Detected lcore 19 as core 19 on socket 0 00:05:21.999 EAL: Detected lcore 20 as core 20 on socket 0 00:05:21.999 EAL: Detected lcore 21 as core 21 on socket 0 00:05:21.999 EAL: Detected lcore 22 as core 22 on socket 0 00:05:21.999 EAL: Detected lcore 23 as core 23 on socket 0 00:05:21.999 EAL: Detected lcore 24 as core 24 on socket 0 00:05:21.999 EAL: Detected lcore 25 as core 25 on socket 0 00:05:21.999 EAL: Detected lcore 26 as core 26 on socket 0 00:05:21.999 EAL: Detected lcore 27 as core 27 on socket 0 00:05:21.999 EAL: Detected lcore 28 as core 28 on socket 0 00:05:21.999 EAL: Detected lcore 29 as core 29 on socket 0 00:05:21.999 EAL: Detected lcore 30 as core 30 on socket 0 00:05:21.999 EAL: Detected lcore 31 as core 31 on socket 0 00:05:21.999 EAL: Detected lcore 32 as core 32 on socket 0 00:05:21.999 EAL: Detected lcore 33 as core 33 on socket 0 00:05:21.999 EAL: Detected lcore 34 as core 34 on socket 0 00:05:21.999 EAL: Detected lcore 35 as core 35 on socket 0 00:05:21.999 EAL: Detected lcore 36 as core 0 on socket 1 00:05:21.999 EAL: Detected lcore 37 as core 1 on socket 1 00:05:21.999 EAL: Detected lcore 38 as core 2 on socket 1 00:05:21.999 EAL: Detected lcore 39 as core 3 on socket 1 00:05:21.999 EAL: Detected lcore 40 as core 4 on socket 1 00:05:21.999 EAL: Detected lcore 41 as core 5 on socket 1 00:05:21.999 EAL: Detected lcore 42 as core 6 on socket 1 00:05:21.999 EAL: Detected lcore 43 as core 7 on socket 1 00:05:21.999 EAL: Detected lcore 44 as core 8 on socket 1 00:05:21.999 EAL: Detected lcore 45 as core 9 on socket 1 00:05:21.999 EAL: Detected lcore 46 as core 10 on socket 1 00:05:21.999 EAL: Detected lcore 47 as core 11 on socket 1 00:05:21.999 EAL: Detected lcore 48 as core 12 on socket 1 00:05:21.999 EAL: Detected lcore 49 as core 13 on socket 1 00:05:21.999 EAL: Detected lcore 50 as core 14 on socket 1 00:05:21.999 EAL: Detected lcore 51 as core 15 on socket 1 00:05:21.999 EAL: Detected lcore 52 as core 16 on socket 1 00:05:21.999 EAL: Detected lcore 53 as core 17 on socket 1 00:05:21.999 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:21.999 EAL: Detected lcore 55 as core 19 on socket 1 00:05:21.999 EAL: Detected lcore 56 as core 20 on socket 1 00:05:21.999 EAL: Detected lcore 57 as core 21 on socket 1 00:05:21.999 EAL: Detected lcore 58 as core 22 on socket 1 00:05:21.999 EAL: Detected lcore 59 as core 23 on socket 1 00:05:21.999 EAL: Detected lcore 60 as core 24 on socket 1 00:05:21.999 EAL: Detected lcore 61 as core 25 on socket 1 00:05:21.999 EAL: Detected lcore 62 as core 26 on socket 1 00:05:21.999 EAL: Detected lcore 63 as core 27 on socket 1 00:05:21.999 EAL: Detected lcore 64 as core 28 on socket 1 00:05:21.999 EAL: Detected lcore 65 as core 29 on socket 1 00:05:21.999 EAL: Detected lcore 66 as core 30 on socket 1 00:05:21.999 EAL: Detected lcore 67 as core 31 on socket 1 00:05:21.999 EAL: Detected lcore 68 as core 32 on socket 1 00:05:21.999 EAL: Detected lcore 69 as core 33 on socket 1 00:05:21.999 EAL: Detected lcore 70 as core 34 on socket 1 00:05:21.999 EAL: Detected lcore 71 as core 35 on socket 1 00:05:22.000 EAL: Detected lcore 72 as core 0 on socket 0 00:05:22.000 EAL: Detected lcore 73 as core 1 on socket 0 00:05:22.000 EAL: Detected lcore 74 as core 2 on socket 0 00:05:22.000 EAL: Detected lcore 75 as core 3 on socket 0 00:05:22.000 EAL: Detected lcore 76 as core 4 on socket 0 00:05:22.000 EAL: Detected lcore 77 as core 5 on socket 0 00:05:22.000 EAL: Detected lcore 78 as core 6 on socket 0 00:05:22.000 EAL: Detected lcore 79 as core 7 on socket 0 00:05:22.000 EAL: Detected lcore 80 as core 8 on socket 0 00:05:22.000 EAL: Detected lcore 81 as core 9 on socket 0 00:05:22.000 EAL: Detected lcore 82 as core 10 on socket 0 00:05:22.000 EAL: Detected lcore 83 as core 11 on socket 0 00:05:22.000 EAL: Detected lcore 84 as core 12 on socket 0 00:05:22.000 EAL: Detected lcore 85 as core 13 on socket 0 00:05:22.000 EAL: Detected lcore 86 as core 14 on socket 0 00:05:22.000 EAL: Detected lcore 87 as core 15 on socket 0 00:05:22.000 EAL: Detected lcore 88 as core 16 on socket 0 00:05:22.000 EAL: Detected lcore 89 as core 17 on socket 0 00:05:22.000 EAL: Detected lcore 90 as core 18 on socket 0 00:05:22.000 EAL: Detected lcore 91 as core 19 on socket 0 00:05:22.000 EAL: Detected lcore 92 as core 20 on socket 0 00:05:22.000 EAL: Detected lcore 93 as core 21 on socket 0 00:05:22.000 EAL: Detected lcore 94 as core 22 on socket 0 00:05:22.000 EAL: Detected lcore 95 as core 23 on socket 0 00:05:22.000 EAL: Detected lcore 96 as core 24 on socket 0 00:05:22.000 EAL: Detected lcore 97 as core 25 on socket 0 00:05:22.000 EAL: Detected lcore 98 as core 26 on socket 0 00:05:22.000 EAL: Detected lcore 99 as core 27 on socket 0 00:05:22.000 EAL: Detected lcore 100 as core 28 on socket 0 00:05:22.000 EAL: Detected lcore 101 as core 29 on socket 0 00:05:22.000 EAL: Detected lcore 102 as core 30 on socket 0 00:05:22.000 EAL: Detected lcore 103 as core 31 on socket 0 00:05:22.000 EAL: Detected lcore 104 as core 32 on socket 0 00:05:22.000 EAL: Detected lcore 105 as core 33 on socket 0 00:05:22.000 EAL: Detected lcore 106 as core 34 on socket 0 00:05:22.000 EAL: Detected lcore 107 as core 35 on socket 0 00:05:22.000 EAL: Detected lcore 108 as core 0 on socket 1 00:05:22.000 EAL: Detected lcore 109 as core 1 on socket 1 00:05:22.000 EAL: Detected lcore 110 as core 2 on socket 1 00:05:22.000 EAL: Detected lcore 111 as core 3 on socket 1 00:05:22.000 EAL: Detected lcore 112 as core 4 on socket 1 00:05:22.000 EAL: Detected lcore 113 as core 5 on socket 1 00:05:22.000 EAL: Detected lcore 114 as core 6 on socket 1 00:05:22.000 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:22.000 EAL: Detected lcore 116 as core 8 on socket 1 00:05:22.000 EAL: Detected lcore 117 as core 9 on socket 1 00:05:22.000 EAL: Detected lcore 118 as core 10 on socket 1 00:05:22.000 EAL: Detected lcore 119 as core 11 on socket 1 00:05:22.000 EAL: Detected lcore 120 as core 12 on socket 1 00:05:22.000 EAL: Detected lcore 121 as core 13 on socket 1 00:05:22.000 EAL: Detected lcore 122 as core 14 on socket 1 00:05:22.000 EAL: Detected lcore 123 as core 15 on socket 1 00:05:22.000 EAL: Detected lcore 124 as core 16 on socket 1 00:05:22.000 EAL: Detected lcore 125 as core 17 on socket 1 00:05:22.000 EAL: Detected lcore 126 as core 18 on socket 1 00:05:22.000 EAL: Detected lcore 127 as core 19 on socket 1 00:05:22.000 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:22.000 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:22.000 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:22.000 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:22.000 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:22.000 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:22.000 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:22.000 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:22.000 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:22.000 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:22.000 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:22.000 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:22.000 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:22.000 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:22.000 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:22.000 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:22.000 EAL: Maximum logical cores by configuration: 128 00:05:22.000 EAL: Detected CPU lcores: 128 00:05:22.000 EAL: Detected NUMA nodes: 2 00:05:22.000 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:22.000 EAL: Detected shared linkage of DPDK 00:05:22.000 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.000 EAL: Bus pci wants IOVA as 'DC' 00:05:22.000 EAL: Buses did not request a specific IOVA mode. 00:05:22.000 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:22.000 EAL: Selected IOVA mode 'VA' 00:05:22.000 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.000 EAL: Probing VFIO support... 00:05:22.000 EAL: IOMMU type 1 (Type 1) is supported 00:05:22.000 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:22.000 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:22.000 EAL: VFIO support initialized 00:05:22.000 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.000 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.000 EAL: Setting up physically contiguous memory... 
00:05:22.000 EAL: Setting maximum number of open files to 524288 00:05:22.000 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.000 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:22.000 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.000 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.000 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.000 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.000 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.000 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.000 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.000 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.000 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.000 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.000 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.000 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.000 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.000 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.000 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.000 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.000 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.000 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:22.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.000 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:22.000 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.000 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:22.000 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:22.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.000 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:22.000 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.000 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:22.000 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:22.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.000 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:22.000 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.000 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:22.000 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:22.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.000 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:22.000 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.000 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:22.000 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:22.000 EAL: Hugepages will be freed exactly as allocated. 00:05:22.000 EAL: No shared files mode enabled, IPC is disabled 00:05:22.000 EAL: No shared files mode enabled, IPC is disabled 00:05:22.000 EAL: TSC frequency is ~2400000 KHz 00:05:22.000 EAL: Main lcore 0 is ready (tid=7efdc4e74a00;cpuset=[0]) 00:05:22.000 EAL: Trying to obtain current memory policy. 00:05:22.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.000 EAL: Restoring previous memory policy: 0 00:05:22.000 EAL: request: mp_malloc_sync 00:05:22.000 EAL: No shared files mode enabled, IPC is disabled 00:05:22.000 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.000 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.262 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.262 00:05:22.262 00:05:22.262 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.262 http://cunit.sourceforge.net/ 00:05:22.262 00:05:22.262 00:05:22.262 Suite: components_suite 00:05:22.262 Test: vtophys_malloc_test ...passed 00:05:22.262 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:22.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.262 EAL: Restoring previous memory policy: 4 00:05:22.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.262 EAL: request: mp_malloc_sync 00:05:22.262 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: Heap on socket 0 was expanded by 4MB 00:05:22.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.262 EAL: request: mp_malloc_sync 00:05:22.262 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: Heap on socket 0 was shrunk by 4MB 00:05:22.262 EAL: Trying to obtain current memory policy. 00:05:22.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.262 EAL: Restoring previous memory policy: 4 00:05:22.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.262 EAL: request: mp_malloc_sync 00:05:22.262 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: Heap on socket 0 was expanded by 6MB 00:05:22.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.262 EAL: request: mp_malloc_sync 00:05:22.262 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: Heap on socket 0 was shrunk by 6MB 00:05:22.262 EAL: Trying to obtain current memory policy. 00:05:22.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.262 EAL: Restoring previous memory policy: 4 00:05:22.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.262 EAL: request: mp_malloc_sync 00:05:22.262 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: Heap on socket 0 was expanded by 10MB 00:05:22.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.262 EAL: request: mp_malloc_sync 00:05:22.262 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: Heap on socket 0 was shrunk by 10MB 00:05:22.262 EAL: Trying to obtain current memory policy. 
00:05:22.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.262 EAL: Restoring previous memory policy: 4 00:05:22.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.262 EAL: request: mp_malloc_sync 00:05:22.262 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: Heap on socket 0 was expanded by 18MB 00:05:22.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.262 EAL: request: mp_malloc_sync 00:05:22.262 EAL: No shared files mode enabled, IPC is disabled 00:05:22.262 EAL: Heap on socket 0 was shrunk by 18MB 00:05:22.262 EAL: Trying to obtain current memory policy. 00:05:22.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.263 EAL: Restoring previous memory policy: 4 00:05:22.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.263 EAL: request: mp_malloc_sync 00:05:22.263 EAL: No shared files mode enabled, IPC is disabled 00:05:22.263 EAL: Heap on socket 0 was expanded by 34MB 00:05:22.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.263 EAL: request: mp_malloc_sync 00:05:22.263 EAL: No shared files mode enabled, IPC is disabled 00:05:22.263 EAL: Heap on socket 0 was shrunk by 34MB 00:05:22.263 EAL: Trying to obtain current memory policy. 00:05:22.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.263 EAL: Restoring previous memory policy: 4 00:05:22.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.263 EAL: request: mp_malloc_sync 00:05:22.263 EAL: No shared files mode enabled, IPC is disabled 00:05:22.263 EAL: Heap on socket 0 was expanded by 66MB 00:05:22.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.263 EAL: request: mp_malloc_sync 00:05:22.263 EAL: No shared files mode enabled, IPC is disabled 00:05:22.263 EAL: Heap on socket 0 was shrunk by 66MB 00:05:22.263 EAL: Trying to obtain current memory policy. 00:05:22.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.263 EAL: Restoring previous memory policy: 4 00:05:22.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.263 EAL: request: mp_malloc_sync 00:05:22.263 EAL: No shared files mode enabled, IPC is disabled 00:05:22.263 EAL: Heap on socket 0 was expanded by 130MB 00:05:22.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.263 EAL: request: mp_malloc_sync 00:05:22.263 EAL: No shared files mode enabled, IPC is disabled 00:05:22.263 EAL: Heap on socket 0 was shrunk by 130MB 00:05:22.263 EAL: Trying to obtain current memory policy. 00:05:22.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.263 EAL: Restoring previous memory policy: 4 00:05:22.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.263 EAL: request: mp_malloc_sync 00:05:22.263 EAL: No shared files mode enabled, IPC is disabled 00:05:22.263 EAL: Heap on socket 0 was expanded by 258MB 00:05:22.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.263 EAL: request: mp_malloc_sync 00:05:22.263 EAL: No shared files mode enabled, IPC is disabled 00:05:22.263 EAL: Heap on socket 0 was shrunk by 258MB 00:05:22.263 EAL: Trying to obtain current memory policy. 
00:05:22.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.525 EAL: Restoring previous memory policy: 4 00:05:22.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.525 EAL: request: mp_malloc_sync 00:05:22.525 EAL: No shared files mode enabled, IPC is disabled 00:05:22.525 EAL: Heap on socket 0 was expanded by 514MB 00:05:22.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.525 EAL: request: mp_malloc_sync 00:05:22.525 EAL: No shared files mode enabled, IPC is disabled 00:05:22.525 EAL: Heap on socket 0 was shrunk by 514MB 00:05:22.525 EAL: Trying to obtain current memory policy. 00:05:22.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.787 EAL: Restoring previous memory policy: 4 00:05:22.787 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.787 EAL: request: mp_malloc_sync 00:05:22.787 EAL: No shared files mode enabled, IPC is disabled 00:05:22.787 EAL: Heap on socket 0 was expanded by 1026MB 00:05:22.787 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.787 EAL: request: mp_malloc_sync 00:05:22.787 EAL: No shared files mode enabled, IPC is disabled 00:05:22.787 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:22.787 passed 00:05:22.787 00:05:22.787 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.787 suites 1 1 n/a 0 0 00:05:22.787 tests 2 2 2 0 0 00:05:22.787 asserts 497 497 497 0 n/a 00:05:22.787 00:05:22.787 Elapsed time = 0.654 seconds 00:05:22.787 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.787 EAL: request: mp_malloc_sync 00:05:22.787 EAL: No shared files mode enabled, IPC is disabled 00:05:22.787 EAL: Heap on socket 0 was shrunk by 2MB 00:05:22.787 EAL: No shared files mode enabled, IPC is disabled 00:05:22.787 EAL: No shared files mode enabled, IPC is disabled 00:05:22.787 EAL: No shared files mode enabled, IPC is disabled 00:05:22.787 00:05:22.787 real 0m0.809s 00:05:22.787 user 0m0.406s 00:05:22.787 sys 0m0.358s 00:05:22.787 22:32:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.787 22:32:07 -- common/autotest_common.sh@10 -- # set +x 00:05:22.787 ************************************ 00:05:22.787 END TEST env_vtophys 00:05:22.787 ************************************ 00:05:22.787 22:32:07 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:22.787 22:32:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.787 22:32:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.787 22:32:07 -- common/autotest_common.sh@10 -- # set +x 00:05:22.787 ************************************ 00:05:22.787 START TEST env_pci 00:05:22.787 ************************************ 00:05:22.787 22:32:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:23.050 00:05:23.050 00:05:23.050 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.050 http://cunit.sourceforge.net/ 00:05:23.050 00:05:23.050 00:05:23.050 Suite: pci 00:05:23.050 Test: pci_hook ...[2024-04-15 22:32:07.598623] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 889857 has claimed it 00:05:23.050 EAL: Cannot find device (10000:00:01.0) 00:05:23.050 EAL: Failed to attach device on primary process 00:05:23.050 passed 00:05:23.050 00:05:23.050 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.050 suites 1 1 n/a 0 0 00:05:23.050 tests 1 1 1 0 0 
00:05:23.050 asserts 25 25 25 0 n/a 00:05:23.050 00:05:23.050 Elapsed time = 0.033 seconds 00:05:23.050 00:05:23.050 real 0m0.053s 00:05:23.050 user 0m0.015s 00:05:23.050 sys 0m0.038s 00:05:23.050 22:32:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.050 22:32:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.050 ************************************ 00:05:23.050 END TEST env_pci 00:05:23.050 ************************************ 00:05:23.050 22:32:07 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:23.050 22:32:07 -- env/env.sh@15 -- # uname 00:05:23.050 22:32:07 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:23.050 22:32:07 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:23.050 22:32:07 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.050 22:32:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:23.050 22:32:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.050 22:32:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.050 ************************************ 00:05:23.050 START TEST env_dpdk_post_init 00:05:23.050 ************************************ 00:05:23.050 22:32:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.050 EAL: Detected CPU lcores: 128 00:05:23.050 EAL: Detected NUMA nodes: 2 00:05:23.050 EAL: Detected shared linkage of DPDK 00:05:23.050 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.050 EAL: Selected IOVA mode 'VA' 00:05:23.050 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.050 EAL: VFIO support initialized 00:05:23.050 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.050 EAL: Using IOMMU type 1 (Type 1) 00:05:23.311 EAL: Ignore mapping IO port bar(1) 00:05:23.311 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:23.573 EAL: Ignore mapping IO port bar(1) 00:05:23.573 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:23.834 EAL: Ignore mapping IO port bar(1) 00:05:23.834 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:23.834 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.359 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:24.359 EAL: Ignore mapping IO port bar(1) 00:05:24.359 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:24.620 EAL: Ignore mapping IO port bar(1) 00:05:24.620 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:24.881 EAL: Ignore mapping IO port bar(1) 00:05:24.881 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:25.142 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:25.403 EAL: Ignore mapping IO port bar(1) 00:05:25.403 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:25.403 EAL: Ignore mapping IO port bar(1) 00:05:25.671 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:25.671 EAL: Ignore mapping IO port bar(1) 00:05:25.934 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:05:25.934 EAL: Ignore mapping IO port bar(1) 00:05:25.934 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:26.225 EAL: Ignore mapping IO port bar(1) 00:05:26.225 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:26.499 EAL: Ignore mapping IO port bar(1) 00:05:26.499 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:26.499 EAL: Ignore mapping IO port bar(1) 00:05:26.771 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:26.772 EAL: Ignore mapping IO port bar(1) 00:05:27.032 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:27.032 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:27.032 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:27.032 Starting DPDK initialization... 00:05:27.032 Starting SPDK post initialization... 00:05:27.032 SPDK NVMe probe 00:05:27.032 Attaching to 0000:65:00.0 00:05:27.032 Attached to 0000:65:00.0 00:05:27.032 Cleaning up... 00:05:28.958 00:05:28.958 real 0m5.731s 00:05:28.958 user 0m0.182s 00:05:28.958 sys 0m0.093s 00:05:28.958 22:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.958 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:05:28.958 ************************************ 00:05:28.958 END TEST env_dpdk_post_init 00:05:28.958 ************************************ 00:05:28.958 22:32:13 -- env/env.sh@26 -- # uname 00:05:28.958 22:32:13 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.958 22:32:13 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.958 22:32:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.958 22:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.958 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:05:28.958 ************************************ 00:05:28.958 START TEST env_mem_callbacks 00:05:28.958 ************************************ 00:05:28.958 22:32:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.958 EAL: Detected CPU lcores: 128 00:05:28.958 EAL: Detected NUMA nodes: 2 00:05:28.958 EAL: Detected shared linkage of DPDK 00:05:28.958 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.958 EAL: Selected IOVA mode 'VA' 00:05:28.958 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.958 EAL: VFIO support initialized 00:05:28.958 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.958 00:05:28.958 00:05:28.958 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.958 http://cunit.sourceforge.net/ 00:05:28.958 00:05:28.958 00:05:28.958 Suite: memory 00:05:28.958 Test: test ... 
00:05:28.958 register 0x200000200000 2097152 00:05:28.958 malloc 3145728 00:05:28.958 register 0x200000400000 4194304 00:05:28.958 buf 0x200000500000 len 3145728 PASSED 00:05:28.958 malloc 64 00:05:28.958 buf 0x2000004fff40 len 64 PASSED 00:05:28.958 malloc 4194304 00:05:28.958 register 0x200000800000 6291456 00:05:28.958 buf 0x200000a00000 len 4194304 PASSED 00:05:28.958 free 0x200000500000 3145728 00:05:28.958 free 0x2000004fff40 64 00:05:28.958 unregister 0x200000400000 4194304 PASSED 00:05:28.958 free 0x200000a00000 4194304 00:05:28.958 unregister 0x200000800000 6291456 PASSED 00:05:28.958 malloc 8388608 00:05:28.958 register 0x200000400000 10485760 00:05:28.958 buf 0x200000600000 len 8388608 PASSED 00:05:28.958 free 0x200000600000 8388608 00:05:28.958 unregister 0x200000400000 10485760 PASSED 00:05:28.958 passed 00:05:28.958 00:05:28.958 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.958 suites 1 1 n/a 0 0 00:05:28.958 tests 1 1 1 0 0 00:05:28.958 asserts 15 15 15 0 n/a 00:05:28.958 00:05:28.958 Elapsed time = 0.005 seconds 00:05:28.958 00:05:28.958 real 0m0.061s 00:05:28.958 user 0m0.021s 00:05:28.958 sys 0m0.040s 00:05:28.958 22:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.958 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:05:28.958 ************************************ 00:05:28.958 END TEST env_mem_callbacks 00:05:28.958 ************************************ 00:05:28.958 00:05:28.958 real 0m7.166s 00:05:28.958 user 0m0.924s 00:05:28.958 sys 0m0.778s 00:05:28.958 22:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.958 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:05:28.958 ************************************ 00:05:28.958 END TEST env 00:05:28.958 ************************************ 00:05:28.958 22:32:13 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.959 22:32:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.959 22:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.959 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:05:28.959 ************************************ 00:05:28.959 START TEST rpc 00:05:28.959 ************************************ 00:05:28.959 22:32:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.959 * Looking for test storage... 00:05:28.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:28.959 22:32:13 -- rpc/rpc.sh@65 -- # spdk_pid=891081 00:05:28.959 22:32:13 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.959 22:32:13 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:28.959 22:32:13 -- rpc/rpc.sh@67 -- # waitforlisten 891081 00:05:28.959 22:32:13 -- common/autotest_common.sh@819 -- # '[' -z 891081 ']' 00:05:28.959 22:32:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.959 22:32:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.959 22:32:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.959 22:32:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.959 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:05:28.959 [2024-04-15 22:32:13.741735] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:05:28.959 [2024-04-15 22:32:13.741806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891081 ] 00:05:29.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.220 [2024-04-15 22:32:13.812172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.220 [2024-04-15 22:32:13.884087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.220 [2024-04-15 22:32:13.884208] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.220 [2024-04-15 22:32:13.884218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 891081' to capture a snapshot of events at runtime. 00:05:29.220 [2024-04-15 22:32:13.884226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid891081 for offline analysis/debug. 00:05:29.220 [2024-04-15 22:32:13.884245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.797 22:32:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.797 22:32:14 -- common/autotest_common.sh@852 -- # return 0 00:05:29.797 22:32:14 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.798 22:32:14 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.798 22:32:14 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:29.798 22:32:14 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:29.798 22:32:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.798 22:32:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.798 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:29.798 ************************************ 00:05:29.798 START TEST rpc_integrity 00:05:29.798 ************************************ 00:05:29.798 22:32:14 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:29.798 22:32:14 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.798 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:29.798 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:29.798 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:29.798 22:32:14 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.798 22:32:14 -- rpc/rpc.sh@13 -- # jq length 00:05:29.798 22:32:14 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.798 22:32:14 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.798 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:29.798 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:29.798 22:32:14 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:05:29.798 22:32:14 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:29.798 22:32:14 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.798 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:29.798 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:29.798 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:29.798 22:32:14 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.798 { 00:05:29.798 "name": "Malloc0", 00:05:29.798 "aliases": [ 00:05:29.798 "68a00560-df34-4941-a3ba-2d7af4ccadde" 00:05:29.798 ], 00:05:29.798 "product_name": "Malloc disk", 00:05:29.798 "block_size": 512, 00:05:29.798 "num_blocks": 16384, 00:05:29.798 "uuid": "68a00560-df34-4941-a3ba-2d7af4ccadde", 00:05:29.798 "assigned_rate_limits": { 00:05:29.798 "rw_ios_per_sec": 0, 00:05:29.798 "rw_mbytes_per_sec": 0, 00:05:29.798 "r_mbytes_per_sec": 0, 00:05:29.798 "w_mbytes_per_sec": 0 00:05:29.798 }, 00:05:29.798 "claimed": false, 00:05:29.798 "zoned": false, 00:05:29.798 "supported_io_types": { 00:05:29.798 "read": true, 00:05:29.798 "write": true, 00:05:29.798 "unmap": true, 00:05:29.798 "write_zeroes": true, 00:05:29.798 "flush": true, 00:05:29.798 "reset": true, 00:05:29.798 "compare": false, 00:05:29.798 "compare_and_write": false, 00:05:29.798 "abort": true, 00:05:29.798 "nvme_admin": false, 00:05:29.798 "nvme_io": false 00:05:29.798 }, 00:05:29.798 "memory_domains": [ 00:05:29.798 { 00:05:29.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.798 "dma_device_type": 2 00:05:29.798 } 00:05:29.798 ], 00:05:29.798 "driver_specific": {} 00:05:29.798 } 00:05:29.798 ]' 00:05:29.798 22:32:14 -- rpc/rpc.sh@17 -- # jq length 00:05:30.059 22:32:14 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.059 22:32:14 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.059 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.059 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.059 [2024-04-15 22:32:14.637521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.059 [2024-04-15 22:32:14.637559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.059 [2024-04-15 22:32:14.637573] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15b0cf0 00:05:30.059 [2024-04-15 22:32:14.637580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.060 [2024-04-15 22:32:14.638919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.060 [2024-04-15 22:32:14.638939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.060 Passthru0 00:05:30.060 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.060 22:32:14 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.060 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.060 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.060 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.060 22:32:14 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.060 { 00:05:30.060 "name": "Malloc0", 00:05:30.060 "aliases": [ 00:05:30.060 "68a00560-df34-4941-a3ba-2d7af4ccadde" 00:05:30.060 ], 00:05:30.060 "product_name": "Malloc disk", 00:05:30.060 "block_size": 512, 00:05:30.060 "num_blocks": 16384, 00:05:30.060 "uuid": "68a00560-df34-4941-a3ba-2d7af4ccadde", 00:05:30.060 "assigned_rate_limits": { 00:05:30.060 "rw_ios_per_sec": 0, 00:05:30.060 "rw_mbytes_per_sec": 0, 00:05:30.060 
"r_mbytes_per_sec": 0, 00:05:30.060 "w_mbytes_per_sec": 0 00:05:30.060 }, 00:05:30.060 "claimed": true, 00:05:30.060 "claim_type": "exclusive_write", 00:05:30.060 "zoned": false, 00:05:30.060 "supported_io_types": { 00:05:30.060 "read": true, 00:05:30.060 "write": true, 00:05:30.060 "unmap": true, 00:05:30.060 "write_zeroes": true, 00:05:30.060 "flush": true, 00:05:30.060 "reset": true, 00:05:30.060 "compare": false, 00:05:30.060 "compare_and_write": false, 00:05:30.060 "abort": true, 00:05:30.060 "nvme_admin": false, 00:05:30.060 "nvme_io": false 00:05:30.060 }, 00:05:30.060 "memory_domains": [ 00:05:30.060 { 00:05:30.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.060 "dma_device_type": 2 00:05:30.060 } 00:05:30.060 ], 00:05:30.060 "driver_specific": {} 00:05:30.060 }, 00:05:30.060 { 00:05:30.060 "name": "Passthru0", 00:05:30.060 "aliases": [ 00:05:30.060 "70921501-7755-593e-b58c-a6a85255b92c" 00:05:30.060 ], 00:05:30.060 "product_name": "passthru", 00:05:30.060 "block_size": 512, 00:05:30.060 "num_blocks": 16384, 00:05:30.060 "uuid": "70921501-7755-593e-b58c-a6a85255b92c", 00:05:30.060 "assigned_rate_limits": { 00:05:30.060 "rw_ios_per_sec": 0, 00:05:30.060 "rw_mbytes_per_sec": 0, 00:05:30.060 "r_mbytes_per_sec": 0, 00:05:30.060 "w_mbytes_per_sec": 0 00:05:30.060 }, 00:05:30.060 "claimed": false, 00:05:30.060 "zoned": false, 00:05:30.060 "supported_io_types": { 00:05:30.060 "read": true, 00:05:30.060 "write": true, 00:05:30.060 "unmap": true, 00:05:30.060 "write_zeroes": true, 00:05:30.060 "flush": true, 00:05:30.060 "reset": true, 00:05:30.060 "compare": false, 00:05:30.060 "compare_and_write": false, 00:05:30.060 "abort": true, 00:05:30.060 "nvme_admin": false, 00:05:30.060 "nvme_io": false 00:05:30.060 }, 00:05:30.060 "memory_domains": [ 00:05:30.060 { 00:05:30.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.060 "dma_device_type": 2 00:05:30.060 } 00:05:30.060 ], 00:05:30.060 "driver_specific": { 00:05:30.060 "passthru": { 00:05:30.060 "name": "Passthru0", 00:05:30.060 "base_bdev_name": "Malloc0" 00:05:30.060 } 00:05:30.060 } 00:05:30.060 } 00:05:30.060 ]' 00:05:30.060 22:32:14 -- rpc/rpc.sh@21 -- # jq length 00:05:30.060 22:32:14 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.060 22:32:14 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.060 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.060 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.060 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.060 22:32:14 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.060 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.060 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.060 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.060 22:32:14 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.060 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.060 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.060 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.060 22:32:14 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.060 22:32:14 -- rpc/rpc.sh@26 -- # jq length 00:05:30.060 22:32:14 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.060 00:05:30.060 real 0m0.285s 00:05:30.060 user 0m0.178s 00:05:30.060 sys 0m0.037s 00:05:30.060 22:32:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.060 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.060 ************************************ 
00:05:30.060 END TEST rpc_integrity 00:05:30.060 ************************************ 00:05:30.060 22:32:14 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.060 22:32:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.060 22:32:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.060 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.060 ************************************ 00:05:30.060 START TEST rpc_plugins 00:05:30.060 ************************************ 00:05:30.060 22:32:14 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:30.060 22:32:14 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.060 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.060 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.060 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.060 22:32:14 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.060 22:32:14 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.060 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.060 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.060 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.322 22:32:14 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.322 { 00:05:30.322 "name": "Malloc1", 00:05:30.322 "aliases": [ 00:05:30.323 "eea4664b-3a4b-46e6-b25f-5dc204717f22" 00:05:30.323 ], 00:05:30.323 "product_name": "Malloc disk", 00:05:30.323 "block_size": 4096, 00:05:30.323 "num_blocks": 256, 00:05:30.323 "uuid": "eea4664b-3a4b-46e6-b25f-5dc204717f22", 00:05:30.323 "assigned_rate_limits": { 00:05:30.323 "rw_ios_per_sec": 0, 00:05:30.323 "rw_mbytes_per_sec": 0, 00:05:30.323 "r_mbytes_per_sec": 0, 00:05:30.323 "w_mbytes_per_sec": 0 00:05:30.323 }, 00:05:30.323 "claimed": false, 00:05:30.323 "zoned": false, 00:05:30.323 "supported_io_types": { 00:05:30.323 "read": true, 00:05:30.323 "write": true, 00:05:30.323 "unmap": true, 00:05:30.323 "write_zeroes": true, 00:05:30.323 "flush": true, 00:05:30.323 "reset": true, 00:05:30.323 "compare": false, 00:05:30.323 "compare_and_write": false, 00:05:30.323 "abort": true, 00:05:30.323 "nvme_admin": false, 00:05:30.323 "nvme_io": false 00:05:30.323 }, 00:05:30.323 "memory_domains": [ 00:05:30.323 { 00:05:30.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.323 "dma_device_type": 2 00:05:30.323 } 00:05:30.323 ], 00:05:30.323 "driver_specific": {} 00:05:30.323 } 00:05:30.323 ]' 00:05:30.323 22:32:14 -- rpc/rpc.sh@32 -- # jq length 00:05:30.323 22:32:14 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.323 22:32:14 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.323 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.323 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.323 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.323 22:32:14 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.323 22:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.323 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.323 22:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.323 22:32:14 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.323 22:32:14 -- rpc/rpc.sh@36 -- # jq length 00:05:30.323 22:32:14 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:30.323 00:05:30.323 real 0m0.143s 00:05:30.323 user 0m0.084s 00:05:30.323 sys 0m0.020s 00:05:30.323 22:32:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.323 22:32:14 -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.323 ************************************ 00:05:30.323 END TEST rpc_plugins 00:05:30.323 ************************************ 00:05:30.323 22:32:15 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.323 22:32:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.323 22:32:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.323 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.323 ************************************ 00:05:30.323 START TEST rpc_trace_cmd_test 00:05:30.323 ************************************ 00:05:30.323 22:32:15 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:30.323 22:32:15 -- rpc/rpc.sh@40 -- # local info 00:05:30.323 22:32:15 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.323 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.323 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.323 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.323 22:32:15 -- rpc/rpc.sh@42 -- # info='{ 00:05:30.323 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid891081", 00:05:30.323 "tpoint_group_mask": "0x8", 00:05:30.323 "iscsi_conn": { 00:05:30.323 "mask": "0x2", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "scsi": { 00:05:30.323 "mask": "0x4", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "bdev": { 00:05:30.323 "mask": "0x8", 00:05:30.323 "tpoint_mask": "0xffffffffffffffff" 00:05:30.323 }, 00:05:30.323 "nvmf_rdma": { 00:05:30.323 "mask": "0x10", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "nvmf_tcp": { 00:05:30.323 "mask": "0x20", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "ftl": { 00:05:30.323 "mask": "0x40", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "blobfs": { 00:05:30.323 "mask": "0x80", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "dsa": { 00:05:30.323 "mask": "0x200", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "thread": { 00:05:30.323 "mask": "0x400", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "nvme_pcie": { 00:05:30.323 "mask": "0x800", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "iaa": { 00:05:30.323 "mask": "0x1000", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "nvme_tcp": { 00:05:30.323 "mask": "0x2000", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 }, 00:05:30.323 "bdev_nvme": { 00:05:30.323 "mask": "0x4000", 00:05:30.323 "tpoint_mask": "0x0" 00:05:30.323 } 00:05:30.323 }' 00:05:30.323 22:32:15 -- rpc/rpc.sh@43 -- # jq length 00:05:30.323 22:32:15 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:30.323 22:32:15 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.323 22:32:15 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.584 22:32:15 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.584 22:32:15 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.584 22:32:15 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.584 22:32:15 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.584 22:32:15 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.584 22:32:15 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.584 00:05:30.584 real 0m0.246s 00:05:30.584 user 0m0.207s 00:05:30.584 sys 0m0.030s 00:05:30.584 22:32:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.584 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 ************************************ 
00:05:30.584 END TEST rpc_trace_cmd_test 00:05:30.584 ************************************ 00:05:30.584 22:32:15 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.584 22:32:15 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.584 22:32:15 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.584 22:32:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.584 22:32:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.584 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 ************************************ 00:05:30.584 START TEST rpc_daemon_integrity 00:05:30.584 ************************************ 00:05:30.584 22:32:15 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:30.584 22:32:15 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.584 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.584 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.584 22:32:15 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.584 22:32:15 -- rpc/rpc.sh@13 -- # jq length 00:05:30.584 22:32:15 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.584 22:32:15 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.584 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.584 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.584 22:32:15 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.584 22:32:15 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.584 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.584 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.846 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.846 22:32:15 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.846 { 00:05:30.846 "name": "Malloc2", 00:05:30.846 "aliases": [ 00:05:30.846 "b110cbf2-4f8a-4a76-88a5-396abc6b547d" 00:05:30.846 ], 00:05:30.846 "product_name": "Malloc disk", 00:05:30.846 "block_size": 512, 00:05:30.846 "num_blocks": 16384, 00:05:30.846 "uuid": "b110cbf2-4f8a-4a76-88a5-396abc6b547d", 00:05:30.846 "assigned_rate_limits": { 00:05:30.846 "rw_ios_per_sec": 0, 00:05:30.846 "rw_mbytes_per_sec": 0, 00:05:30.846 "r_mbytes_per_sec": 0, 00:05:30.846 "w_mbytes_per_sec": 0 00:05:30.846 }, 00:05:30.846 "claimed": false, 00:05:30.846 "zoned": false, 00:05:30.846 "supported_io_types": { 00:05:30.846 "read": true, 00:05:30.846 "write": true, 00:05:30.846 "unmap": true, 00:05:30.846 "write_zeroes": true, 00:05:30.846 "flush": true, 00:05:30.846 "reset": true, 00:05:30.846 "compare": false, 00:05:30.846 "compare_and_write": false, 00:05:30.846 "abort": true, 00:05:30.846 "nvme_admin": false, 00:05:30.846 "nvme_io": false 00:05:30.846 }, 00:05:30.846 "memory_domains": [ 00:05:30.846 { 00:05:30.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.846 "dma_device_type": 2 00:05:30.846 } 00:05:30.846 ], 00:05:30.846 "driver_specific": {} 00:05:30.846 } 00:05:30.846 ]' 00:05:30.846 22:32:15 -- rpc/rpc.sh@17 -- # jq length 00:05:30.846 22:32:15 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.846 22:32:15 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.846 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.846 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.846 [2024-04-15 22:32:15.443709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.846 [2024-04-15 
22:32:15.443742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.846 [2024-04-15 22:32:15.443756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15b1720 00:05:30.846 [2024-04-15 22:32:15.443763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.846 [2024-04-15 22:32:15.444969] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.846 [2024-04-15 22:32:15.444988] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.846 Passthru0 00:05:30.846 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.846 22:32:15 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.846 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.846 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.846 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.846 22:32:15 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.846 { 00:05:30.846 "name": "Malloc2", 00:05:30.846 "aliases": [ 00:05:30.846 "b110cbf2-4f8a-4a76-88a5-396abc6b547d" 00:05:30.846 ], 00:05:30.847 "product_name": "Malloc disk", 00:05:30.847 "block_size": 512, 00:05:30.847 "num_blocks": 16384, 00:05:30.847 "uuid": "b110cbf2-4f8a-4a76-88a5-396abc6b547d", 00:05:30.847 "assigned_rate_limits": { 00:05:30.847 "rw_ios_per_sec": 0, 00:05:30.847 "rw_mbytes_per_sec": 0, 00:05:30.847 "r_mbytes_per_sec": 0, 00:05:30.847 "w_mbytes_per_sec": 0 00:05:30.847 }, 00:05:30.847 "claimed": true, 00:05:30.847 "claim_type": "exclusive_write", 00:05:30.847 "zoned": false, 00:05:30.847 "supported_io_types": { 00:05:30.847 "read": true, 00:05:30.847 "write": true, 00:05:30.847 "unmap": true, 00:05:30.847 "write_zeroes": true, 00:05:30.847 "flush": true, 00:05:30.847 "reset": true, 00:05:30.847 "compare": false, 00:05:30.847 "compare_and_write": false, 00:05:30.847 "abort": true, 00:05:30.847 "nvme_admin": false, 00:05:30.847 "nvme_io": false 00:05:30.847 }, 00:05:30.847 "memory_domains": [ 00:05:30.847 { 00:05:30.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.847 "dma_device_type": 2 00:05:30.847 } 00:05:30.847 ], 00:05:30.847 "driver_specific": {} 00:05:30.847 }, 00:05:30.847 { 00:05:30.847 "name": "Passthru0", 00:05:30.847 "aliases": [ 00:05:30.847 "ab674b77-c2bc-502c-9040-e46bf15663c3" 00:05:30.847 ], 00:05:30.847 "product_name": "passthru", 00:05:30.847 "block_size": 512, 00:05:30.847 "num_blocks": 16384, 00:05:30.847 "uuid": "ab674b77-c2bc-502c-9040-e46bf15663c3", 00:05:30.847 "assigned_rate_limits": { 00:05:30.847 "rw_ios_per_sec": 0, 00:05:30.847 "rw_mbytes_per_sec": 0, 00:05:30.847 "r_mbytes_per_sec": 0, 00:05:30.847 "w_mbytes_per_sec": 0 00:05:30.847 }, 00:05:30.847 "claimed": false, 00:05:30.847 "zoned": false, 00:05:30.847 "supported_io_types": { 00:05:30.847 "read": true, 00:05:30.847 "write": true, 00:05:30.847 "unmap": true, 00:05:30.847 "write_zeroes": true, 00:05:30.847 "flush": true, 00:05:30.847 "reset": true, 00:05:30.847 "compare": false, 00:05:30.847 "compare_and_write": false, 00:05:30.847 "abort": true, 00:05:30.847 "nvme_admin": false, 00:05:30.847 "nvme_io": false 00:05:30.847 }, 00:05:30.847 "memory_domains": [ 00:05:30.847 { 00:05:30.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.847 "dma_device_type": 2 00:05:30.847 } 00:05:30.847 ], 00:05:30.847 "driver_specific": { 00:05:30.847 "passthru": { 00:05:30.847 "name": "Passthru0", 00:05:30.847 "base_bdev_name": "Malloc2" 00:05:30.847 } 00:05:30.847 } 00:05:30.847 } 
00:05:30.847 ]' 00:05:30.847 22:32:15 -- rpc/rpc.sh@21 -- # jq length 00:05:30.847 22:32:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.847 22:32:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.847 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.847 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.847 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.847 22:32:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:30.847 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.847 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.847 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.847 22:32:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.847 22:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.847 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.847 22:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.847 22:32:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.847 22:32:15 -- rpc/rpc.sh@26 -- # jq length 00:05:30.847 22:32:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.847 00:05:30.847 real 0m0.286s 00:05:30.847 user 0m0.188s 00:05:30.847 sys 0m0.036s 00:05:30.847 22:32:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.847 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.847 ************************************ 00:05:30.847 END TEST rpc_daemon_integrity 00:05:30.847 ************************************ 00:05:30.847 22:32:15 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:30.847 22:32:15 -- rpc/rpc.sh@84 -- # killprocess 891081 00:05:30.847 22:32:15 -- common/autotest_common.sh@926 -- # '[' -z 891081 ']' 00:05:30.847 22:32:15 -- common/autotest_common.sh@930 -- # kill -0 891081 00:05:30.847 22:32:15 -- common/autotest_common.sh@931 -- # uname 00:05:30.847 22:32:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:30.847 22:32:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 891081 00:05:31.109 22:32:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.109 22:32:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.109 22:32:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 891081' 00:05:31.109 killing process with pid 891081 00:05:31.109 22:32:15 -- common/autotest_common.sh@945 -- # kill 891081 00:05:31.109 22:32:15 -- common/autotest_common.sh@950 -- # wait 891081 00:05:31.109 00:05:31.109 real 0m2.296s 00:05:31.109 user 0m2.999s 00:05:31.109 sys 0m0.611s 00:05:31.109 22:32:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.109 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.109 ************************************ 00:05:31.109 END TEST rpc 00:05:31.109 ************************************ 00:05:31.371 22:32:15 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.371 22:32:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.371 22:32:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.371 22:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.371 ************************************ 00:05:31.371 START TEST rpc_client 00:05:31.371 ************************************ 00:05:31.371 22:32:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.371 * 
Looking for test storage... 00:05:31.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:31.371 22:32:16 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:31.371 OK 00:05:31.371 22:32:16 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.371 00:05:31.371 real 0m0.118s 00:05:31.371 user 0m0.049s 00:05:31.371 sys 0m0.077s 00:05:31.371 22:32:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.371 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.371 ************************************ 00:05:31.371 END TEST rpc_client 00:05:31.371 ************************************ 00:05:31.371 22:32:16 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.371 22:32:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.371 22:32:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.371 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.371 ************************************ 00:05:31.371 START TEST json_config 00:05:31.371 ************************************ 00:05:31.371 22:32:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.371 22:32:16 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.371 22:32:16 -- nvmf/common.sh@7 -- # uname -s 00:05:31.371 22:32:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.371 22:32:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.371 22:32:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.371 22:32:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.371 22:32:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.371 22:32:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.371 22:32:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.371 22:32:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.371 22:32:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.371 22:32:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.371 22:32:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:31.371 22:32:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:31.371 22:32:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.371 22:32:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.371 22:32:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.371 22:32:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.371 22:32:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.371 22:32:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.371 22:32:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.371 22:32:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.371 22:32:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.371 22:32:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.371 22:32:16 -- paths/export.sh@5 -- # export PATH 00:05:31.371 22:32:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.371 22:32:16 -- nvmf/common.sh@46 -- # : 0 00:05:31.371 22:32:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:31.371 22:32:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:31.371 22:32:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:31.371 22:32:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.371 22:32:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.371 22:32:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:31.371 22:32:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:31.371 22:32:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:31.633 22:32:16 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:31.633 22:32:16 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:31.633 22:32:16 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:31.633 22:32:16 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.633 22:32:16 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.633 22:32:16 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:31.633 22:32:16 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.633 22:32:16 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:31.633 22:32:16 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.633 22:32:16 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:31.633 22:32:16 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:31.633 22:32:16 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:31.633 22:32:16 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:31.633 22:32:16 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.633 22:32:16 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:31.633 INFO: JSON configuration test init 00:05:31.633 22:32:16 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:31.633 22:32:16 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:31.633 22:32:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:31.633 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.633 22:32:16 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:31.633 22:32:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:31.633 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.633 22:32:16 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.633 22:32:16 -- json_config/json_config.sh@98 -- # local app=target 00:05:31.633 22:32:16 -- json_config/json_config.sh@99 -- # shift 00:05:31.633 22:32:16 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:31.633 22:32:16 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:31.633 22:32:16 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:31.633 22:32:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.633 22:32:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.633 22:32:16 -- json_config/json_config.sh@111 -- # app_pid[$app]=891859 00:05:31.633 22:32:16 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:31.633 Waiting for target to run... 00:05:31.633 22:32:16 -- json_config/json_config.sh@114 -- # waitforlisten 891859 /var/tmp/spdk_tgt.sock 00:05:31.633 22:32:16 -- common/autotest_common.sh@819 -- # '[' -z 891859 ']' 00:05:31.633 22:32:16 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:31.633 22:32:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.633 22:32:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.633 22:32:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.633 22:32:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.633 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.633 [2024-04-15 22:32:16.260885] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
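The target above was started with --wait-for-rpc, so the harness now blocks in waitforlisten until the RPC socket answers before issuing any configuration calls. A minimal sketch of that polling pattern, assuming scripts/rpc.py and the rpc_get_methods RPC are available and the socket path matches the -r argument shown above (the real waitforlisten helper in autotest_common.sh adds retry limits and failure handling that are omitted here):

sock=/var/tmp/spdk_tgt.sock
for (( i = 0; i < 100; i++ )); do
    # rpc_get_methods only succeeds once the target is accepting RPCs on the socket
    if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
        echo "target is listening on $sock"
        break
    fi
    sleep 0.5
done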
00:05:31.633 [2024-04-15 22:32:16.260958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891859 ] 00:05:31.633 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.896 [2024-04-15 22:32:16.567009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.896 [2024-04-15 22:32:16.622252] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.896 [2024-04-15 22:32:16.622388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.468 22:32:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.468 22:32:17 -- common/autotest_common.sh@852 -- # return 0 00:05:32.468 22:32:17 -- json_config/json_config.sh@115 -- # echo '' 00:05:32.468 00:05:32.468 22:32:17 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:32.468 22:32:17 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:32.468 22:32:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.468 22:32:17 -- common/autotest_common.sh@10 -- # set +x 00:05:32.468 22:32:17 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:32.468 22:32:17 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:32.468 22:32:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:32.468 22:32:17 -- common/autotest_common.sh@10 -- # set +x 00:05:32.468 22:32:17 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.468 22:32:17 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:32.468 22:32:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:33.041 22:32:17 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:33.041 22:32:17 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:33.041 22:32:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:33.041 22:32:17 -- common/autotest_common.sh@10 -- # set +x 00:05:33.041 22:32:17 -- json_config/json_config.sh@48 -- # local ret=0 00:05:33.041 22:32:17 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.041 22:32:17 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:33.041 22:32:17 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:33.041 22:32:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.041 22:32:17 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:33.041 22:32:17 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.041 22:32:17 -- json_config/json_config.sh@51 -- # local get_types 00:05:33.041 22:32:17 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.041 22:32:17 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:33.041 22:32:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.041 22:32:17 -- common/autotest_common.sh@10 -- # set +x 00:05:33.041 22:32:17 -- json_config/json_config.sh@58 -- # return 0 00:05:33.041 22:32:17 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:33.041 22:32:17 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:33.041 22:32:17 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:33.041 22:32:17 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:33.041 22:32:17 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:33.041 22:32:17 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:33.041 22:32:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:33.041 22:32:17 -- common/autotest_common.sh@10 -- # set +x 00:05:33.041 22:32:17 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.041 22:32:17 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:33.041 22:32:17 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:33.041 22:32:17 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.041 22:32:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.302 MallocForNvmf0 00:05:33.302 22:32:17 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.303 22:32:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.303 MallocForNvmf1 00:05:33.303 22:32:18 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.303 22:32:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.564 [2024-04-15 22:32:18.239388] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.564 22:32:18 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.564 22:32:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.825 22:32:18 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.825 22:32:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.825 22:32:18 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.825 22:32:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.087 22:32:18 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.087 22:32:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.087 [2024-04-15 22:32:18.825369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
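The create_nvmf_subsystem_config step above issues each RPC through tgt_rpc, which is simply rpc.py pointed at the target socket. Replayed by hand against a running target, the same sequence (commands exactly as they appear in the trace, with paths relative to the spdk checkout) looks like this:

rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MiB malloc bdev, 512-byte blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MiB malloc bdev, 1024-byte blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Once the listener is added, the target prints the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice seen in the log above.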
00:05:34.087 22:32:18 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:34.087 22:32:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:34.087 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:05:34.087 22:32:18 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:34.087 22:32:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:34.087 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:05:34.348 22:32:18 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:34.348 22:32:18 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.348 22:32:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.348 MallocBdevForConfigChangeCheck 00:05:34.348 22:32:19 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:34.348 22:32:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:34.348 22:32:19 -- common/autotest_common.sh@10 -- # set +x 00:05:34.348 22:32:19 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:34.348 22:32:19 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.609 22:32:19 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:34.609 INFO: shutting down applications... 00:05:34.609 22:32:19 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:34.609 22:32:19 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:34.609 22:32:19 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:34.609 22:32:19 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.183 Calling clear_iscsi_subsystem 00:05:35.183 Calling clear_nvmf_subsystem 00:05:35.183 Calling clear_nbd_subsystem 00:05:35.183 Calling clear_ublk_subsystem 00:05:35.183 Calling clear_vhost_blk_subsystem 00:05:35.183 Calling clear_vhost_scsi_subsystem 00:05:35.183 Calling clear_scheduler_subsystem 00:05:35.183 Calling clear_bdev_subsystem 00:05:35.183 Calling clear_accel_subsystem 00:05:35.183 Calling clear_vmd_subsystem 00:05:35.183 Calling clear_sock_subsystem 00:05:35.183 Calling clear_iobuf_subsystem 00:05:35.183 22:32:19 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.183 22:32:19 -- json_config/json_config.sh@396 -- # count=100 00:05:35.183 22:32:19 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:35.183 22:32:19 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.183 22:32:19 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.183 22:32:19 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.445 22:32:20 -- json_config/json_config.sh@398 -- # break 00:05:35.445 22:32:20 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:35.445 22:32:20 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:35.445 22:32:20 -- json_config/json_config.sh@120 -- # local app=target 00:05:35.445 22:32:20 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:35.445 22:32:20 -- json_config/json_config.sh@124 -- # [[ -n 891859 ]] 00:05:35.445 22:32:20 -- json_config/json_config.sh@127 -- # kill -SIGINT 891859 00:05:35.445 22:32:20 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:35.445 22:32:20 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:35.445 22:32:20 -- json_config/json_config.sh@130 -- # kill -0 891859 00:05:35.445 22:32:20 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:36.017 22:32:20 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:36.017 22:32:20 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:36.018 22:32:20 -- json_config/json_config.sh@130 -- # kill -0 891859 00:05:36.018 22:32:20 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:36.018 22:32:20 -- json_config/json_config.sh@132 -- # break 00:05:36.018 22:32:20 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:36.018 22:32:20 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:36.018 SPDK target shutdown done 00:05:36.018 22:32:20 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:36.018 INFO: relaunching applications... 00:05:36.018 22:32:20 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.018 22:32:20 -- json_config/json_config.sh@98 -- # local app=target 00:05:36.018 22:32:20 -- json_config/json_config.sh@99 -- # shift 00:05:36.018 22:32:20 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:36.018 22:32:20 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:36.018 22:32:20 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:36.018 22:32:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:36.018 22:32:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:36.018 22:32:20 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.018 22:32:20 -- json_config/json_config.sh@111 -- # app_pid[$app]=892918 00:05:36.018 22:32:20 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:36.018 Waiting for target to run... 00:05:36.018 22:32:20 -- json_config/json_config.sh@114 -- # waitforlisten 892918 /var/tmp/spdk_tgt.sock 00:05:36.018 22:32:20 -- common/autotest_common.sh@819 -- # '[' -z 892918 ']' 00:05:36.018 22:32:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.018 22:32:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.018 22:32:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.018 22:32:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.018 22:32:20 -- common/autotest_common.sh@10 -- # set +x 00:05:36.018 [2024-04-15 22:32:20.602397] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
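Between the two targets above, the running configuration was captured with save_config, the first spdk_tgt (pid 891859) was stopped, and a second one (pid 892918) was relaunched from the saved file via --json. A condensed sketch of that cycle, assuming the save_config output is redirected into spdk_tgt_config.json the way json_config.sh prepares it before the relaunch:

# dump the live configuration of the running target to a file
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
# (the test stops the old target here; see the kill -SIGINT / kill -0 loop traced above)
# restart the target and have it replay the saved configuration at startup
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json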
00:05:36.018 [2024-04-15 22:32:20.602459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892918 ] 00:05:36.018 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.279 [2024-04-15 22:32:20.919606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.279 [2024-04-15 22:32:20.969061] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.279 [2024-04-15 22:32:20.969181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.851 [2024-04-15 22:32:21.456795] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.851 [2024-04-15 22:32:21.489188] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.423 22:32:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.423 22:32:21 -- common/autotest_common.sh@852 -- # return 0 00:05:37.423 22:32:21 -- json_config/json_config.sh@115 -- # echo '' 00:05:37.423 00:05:37.423 22:32:21 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:37.423 22:32:21 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.423 INFO: Checking if target configuration is the same... 00:05:37.423 22:32:21 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.423 22:32:21 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:37.423 22:32:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.423 + '[' 2 -ne 2 ']' 00:05:37.423 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.423 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.424 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.424 +++ basename /dev/fd/62 00:05:37.424 ++ mktemp /tmp/62.XXX 00:05:37.424 + tmp_file_1=/tmp/62.lE9 00:05:37.424 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.424 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.424 + tmp_file_2=/tmp/spdk_tgt_config.json.tC8 00:05:37.424 + ret=0 00:05:37.424 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.684 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.684 + diff -u /tmp/62.lE9 /tmp/spdk_tgt_config.json.tC8 00:05:37.684 + echo 'INFO: JSON config files are the same' 00:05:37.684 INFO: JSON config files are the same 00:05:37.684 + rm /tmp/62.lE9 /tmp/spdk_tgt_config.json.tC8 00:05:37.684 + exit 0 00:05:37.684 22:32:22 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:37.684 22:32:22 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:37.684 INFO: changing configuration and checking if this can be detected... 
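The "Checking if target configuration is the same" step above normalizes both the live config and the saved file with config_filter.py -method sort before diffing, so ordering differences in the JSON do not cause a false mismatch. A condensed sketch of that comparison, assuming config_filter.py filters stdin to stdout the way json_diff.sh invokes it:

filter=test/json_config/config_filter.py
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
$filter -method sort < spdk_tgt_config.json > /tmp/saved.json
# identical output means the relaunched target reproduced the saved configuration
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'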
00:05:37.684 22:32:22 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.684 22:32:22 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.684 22:32:22 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:37.684 22:32:22 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.684 22:32:22 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.684 + '[' 2 -ne 2 ']' 00:05:37.684 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.684 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.684 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.684 +++ basename /dev/fd/62 00:05:37.684 ++ mktemp /tmp/62.XXX 00:05:37.684 + tmp_file_1=/tmp/62.uBm 00:05:37.944 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.944 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.944 + tmp_file_2=/tmp/spdk_tgt_config.json.8Rn 00:05:37.944 + ret=0 00:05:37.944 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.944 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.205 + diff -u /tmp/62.uBm /tmp/spdk_tgt_config.json.8Rn 00:05:38.205 + ret=1 00:05:38.205 + echo '=== Start of file: /tmp/62.uBm ===' 00:05:38.205 + cat /tmp/62.uBm 00:05:38.205 + echo '=== End of file: /tmp/62.uBm ===' 00:05:38.205 + echo '' 00:05:38.205 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8Rn ===' 00:05:38.205 + cat /tmp/spdk_tgt_config.json.8Rn 00:05:38.205 + echo '=== End of file: /tmp/spdk_tgt_config.json.8Rn ===' 00:05:38.205 + echo '' 00:05:38.205 + rm /tmp/62.uBm /tmp/spdk_tgt_config.json.8Rn 00:05:38.205 + exit 1 00:05:38.205 22:32:22 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:38.205 INFO: configuration change detected. 
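The follow-up check then makes one deliberate change, deleting MallocBdevForConfigChangeCheck, and repeats the comparison expecting the diff to fail this time. In the same sketch form, reusing the sorted /tmp/saved.json produced above:

filter=test/json_config/config_filter.py
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
# the saved file no longer matches, so a non-zero diff status is the pass condition here
diff -u /tmp/saved.json /tmp/live.json || echo 'INFO: configuration change detected.'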
00:05:38.205 22:32:22 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:38.205 22:32:22 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:38.205 22:32:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.205 22:32:22 -- common/autotest_common.sh@10 -- # set +x 00:05:38.205 22:32:22 -- json_config/json_config.sh@360 -- # local ret=0 00:05:38.205 22:32:22 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:38.205 22:32:22 -- json_config/json_config.sh@370 -- # [[ -n 892918 ]] 00:05:38.205 22:32:22 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:38.205 22:32:22 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.205 22:32:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.205 22:32:22 -- common/autotest_common.sh@10 -- # set +x 00:05:38.205 22:32:22 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:38.205 22:32:22 -- json_config/json_config.sh@246 -- # uname -s 00:05:38.205 22:32:22 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:38.205 22:32:22 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:38.205 22:32:22 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:38.205 22:32:22 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.205 22:32:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.205 22:32:22 -- common/autotest_common.sh@10 -- # set +x 00:05:38.205 22:32:22 -- json_config/json_config.sh@376 -- # killprocess 892918 00:05:38.205 22:32:22 -- common/autotest_common.sh@926 -- # '[' -z 892918 ']' 00:05:38.205 22:32:22 -- common/autotest_common.sh@930 -- # kill -0 892918 00:05:38.205 22:32:22 -- common/autotest_common.sh@931 -- # uname 00:05:38.205 22:32:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.205 22:32:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 892918 00:05:38.205 22:32:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.205 22:32:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.205 22:32:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 892918' 00:05:38.205 killing process with pid 892918 00:05:38.205 22:32:22 -- common/autotest_common.sh@945 -- # kill 892918 00:05:38.205 22:32:22 -- common/autotest_common.sh@950 -- # wait 892918 00:05:38.466 22:32:23 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.466 22:32:23 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:38.466 22:32:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.466 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:05:38.466 22:32:23 -- json_config/json_config.sh@381 -- # return 0 00:05:38.466 22:32:23 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:38.466 INFO: Success 00:05:38.466 00:05:38.466 real 0m7.141s 00:05:38.466 user 0m8.471s 00:05:38.466 sys 0m1.737s 00:05:38.466 22:32:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.466 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:05:38.466 ************************************ 00:05:38.466 END TEST json_config 00:05:38.466 ************************************ 00:05:38.466 22:32:23 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:38.466 22:32:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.466 22:32:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.466 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:05:38.466 ************************************ 00:05:38.466 START TEST json_config_extra_key 00:05:38.466 ************************************ 00:05:38.466 22:32:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.728 22:32:23 -- nvmf/common.sh@7 -- # uname -s 00:05:38.728 22:32:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.728 22:32:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.728 22:32:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.728 22:32:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.728 22:32:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.728 22:32:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.728 22:32:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.728 22:32:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.728 22:32:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.728 22:32:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.728 22:32:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:38.728 22:32:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:38.728 22:32:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.728 22:32:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.728 22:32:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.728 22:32:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.728 22:32:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.728 22:32:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.728 22:32:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.728 22:32:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.728 22:32:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.728 22:32:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.728 22:32:23 -- paths/export.sh@5 -- # export PATH 00:05:38.728 22:32:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.728 22:32:23 -- nvmf/common.sh@46 -- # : 0 00:05:38.728 22:32:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:38.728 22:32:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:38.728 22:32:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:38.728 22:32:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.728 22:32:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.728 22:32:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:38.728 22:32:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:38.728 22:32:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:38.728 INFO: launching applications... 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=893478 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:38.728 Waiting for target to run... 
00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 893478 /var/tmp/spdk_tgt.sock 00:05:38.728 22:32:23 -- common/autotest_common.sh@819 -- # '[' -z 893478 ']' 00:05:38.728 22:32:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.728 22:32:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.728 22:32:23 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:38.728 22:32:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.728 22:32:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.728 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:05:38.728 [2024-04-15 22:32:23.423188] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:05:38.728 [2024-04-15 22:32:23.423263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893478 ] 00:05:38.728 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.989 [2024-04-15 22:32:23.738123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.989 [2024-04-15 22:32:23.794238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.989 [2024-04-15 22:32:23.794380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.562 22:32:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.562 22:32:24 -- common/autotest_common.sh@852 -- # return 0 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:39.562 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:39.562 INFO: shutting down applications... 
00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 893478 ]] 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 893478 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 893478 00:05:39.562 22:32:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:40.134 22:32:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:40.134 22:32:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:40.134 22:32:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 893478 00:05:40.134 22:32:24 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:40.134 22:32:24 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:40.134 22:32:24 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:40.134 22:32:24 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:40.134 SPDK target shutdown done 00:05:40.134 22:32:24 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:40.134 Success 00:05:40.134 00:05:40.134 real 0m1.424s 00:05:40.134 user 0m1.025s 00:05:40.134 sys 0m0.419s 00:05:40.134 22:32:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.134 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:05:40.134 ************************************ 00:05:40.134 END TEST json_config_extra_key 00:05:40.134 ************************************ 00:05:40.134 22:32:24 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.134 22:32:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.134 22:32:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.134 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:05:40.134 ************************************ 00:05:40.134 START TEST alias_rpc 00:05:40.134 ************************************ 00:05:40.134 22:32:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.134 * Looking for test storage... 00:05:40.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:40.134 22:32:24 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.134 22:32:24 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=893851 00:05:40.134 22:32:24 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 893851 00:05:40.134 22:32:24 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.134 22:32:24 -- common/autotest_common.sh@819 -- # '[' -z 893851 ']' 00:05:40.134 22:32:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.134 22:32:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.134 22:32:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:40.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.134 22:32:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.134 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:05:40.134 [2024-04-15 22:32:24.890177] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:05:40.134 [2024-04-15 22:32:24.890256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893851 ] 00:05:40.134 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.397 [2024-04-15 22:32:24.964417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.397 [2024-04-15 22:32:25.035450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.397 [2024-04-15 22:32:25.035594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.969 22:32:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.969 22:32:25 -- common/autotest_common.sh@852 -- # return 0 00:05:40.969 22:32:25 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:41.229 22:32:25 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 893851 00:05:41.229 22:32:25 -- common/autotest_common.sh@926 -- # '[' -z 893851 ']' 00:05:41.229 22:32:25 -- common/autotest_common.sh@930 -- # kill -0 893851 00:05:41.229 22:32:25 -- common/autotest_common.sh@931 -- # uname 00:05:41.229 22:32:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.229 22:32:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 893851 00:05:41.229 22:32:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:41.229 22:32:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:41.229 22:32:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 893851' 00:05:41.229 killing process with pid 893851 00:05:41.229 22:32:25 -- common/autotest_common.sh@945 -- # kill 893851 00:05:41.229 22:32:25 -- common/autotest_common.sh@950 -- # wait 893851 00:05:41.491 00:05:41.491 real 0m1.341s 00:05:41.491 user 0m1.459s 00:05:41.491 sys 0m0.366s 00:05:41.491 22:32:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.491 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:05:41.491 ************************************ 00:05:41.491 END TEST alias_rpc 00:05:41.491 ************************************ 00:05:41.491 22:32:26 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:41.491 22:32:26 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.491 22:32:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.491 22:32:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.491 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:05:41.491 ************************************ 00:05:41.491 START TEST spdkcli_tcp 00:05:41.491 ************************************ 00:05:41.491 22:32:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.491 * Looking for test storage... 
00:05:41.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:41.491 22:32:26 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:41.491 22:32:26 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:41.491 22:32:26 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:41.491 22:32:26 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.491 22:32:26 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.491 22:32:26 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.491 22:32:26 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.491 22:32:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:41.491 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:05:41.491 22:32:26 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=894235 00:05:41.491 22:32:26 -- spdkcli/tcp.sh@27 -- # waitforlisten 894235 00:05:41.491 22:32:26 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.491 22:32:26 -- common/autotest_common.sh@819 -- # '[' -z 894235 ']' 00:05:41.491 22:32:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.491 22:32:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.491 22:32:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.491 22:32:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.491 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:05:41.491 [2024-04-15 22:32:26.283110] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:05:41.491 [2024-04-15 22:32:26.283169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894235 ] 00:05:41.753 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.753 [2024-04-15 22:32:26.350586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.753 [2024-04-15 22:32:26.413279] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.753 [2024-04-15 22:32:26.413502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.753 [2024-04-15 22:32:26.413507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.325 22:32:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:42.325 22:32:27 -- common/autotest_common.sh@852 -- # return 0 00:05:42.325 22:32:27 -- spdkcli/tcp.sh@31 -- # socat_pid=894423 00:05:42.325 22:32:27 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.325 22:32:27 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.586 [ 00:05:42.586 "bdev_malloc_delete", 00:05:42.586 "bdev_malloc_create", 00:05:42.586 "bdev_null_resize", 00:05:42.586 "bdev_null_delete", 00:05:42.586 "bdev_null_create", 00:05:42.586 "bdev_nvme_cuse_unregister", 00:05:42.586 "bdev_nvme_cuse_register", 00:05:42.586 "bdev_opal_new_user", 00:05:42.586 "bdev_opal_set_lock_state", 00:05:42.586 "bdev_opal_delete", 00:05:42.586 "bdev_opal_get_info", 00:05:42.586 "bdev_opal_create", 00:05:42.586 "bdev_nvme_opal_revert", 00:05:42.586 "bdev_nvme_opal_init", 00:05:42.586 "bdev_nvme_send_cmd", 00:05:42.586 "bdev_nvme_get_path_iostat", 00:05:42.586 "bdev_nvme_get_mdns_discovery_info", 00:05:42.586 "bdev_nvme_stop_mdns_discovery", 00:05:42.586 "bdev_nvme_start_mdns_discovery", 00:05:42.586 "bdev_nvme_set_multipath_policy", 00:05:42.586 "bdev_nvme_set_preferred_path", 00:05:42.586 "bdev_nvme_get_io_paths", 00:05:42.586 "bdev_nvme_remove_error_injection", 00:05:42.586 "bdev_nvme_add_error_injection", 00:05:42.586 "bdev_nvme_get_discovery_info", 00:05:42.586 "bdev_nvme_stop_discovery", 00:05:42.586 "bdev_nvme_start_discovery", 00:05:42.586 "bdev_nvme_get_controller_health_info", 00:05:42.586 "bdev_nvme_disable_controller", 00:05:42.586 "bdev_nvme_enable_controller", 00:05:42.586 "bdev_nvme_reset_controller", 00:05:42.586 "bdev_nvme_get_transport_statistics", 00:05:42.586 "bdev_nvme_apply_firmware", 00:05:42.586 "bdev_nvme_detach_controller", 00:05:42.586 "bdev_nvme_get_controllers", 00:05:42.586 "bdev_nvme_attach_controller", 00:05:42.586 "bdev_nvme_set_hotplug", 00:05:42.586 "bdev_nvme_set_options", 00:05:42.586 "bdev_passthru_delete", 00:05:42.586 "bdev_passthru_create", 00:05:42.586 "bdev_lvol_grow_lvstore", 00:05:42.586 "bdev_lvol_get_lvols", 00:05:42.586 "bdev_lvol_get_lvstores", 00:05:42.586 "bdev_lvol_delete", 00:05:42.586 "bdev_lvol_set_read_only", 00:05:42.586 "bdev_lvol_resize", 00:05:42.586 "bdev_lvol_decouple_parent", 00:05:42.586 "bdev_lvol_inflate", 00:05:42.586 "bdev_lvol_rename", 00:05:42.586 "bdev_lvol_clone_bdev", 00:05:42.586 "bdev_lvol_clone", 00:05:42.586 "bdev_lvol_snapshot", 00:05:42.586 "bdev_lvol_create", 00:05:42.586 "bdev_lvol_delete_lvstore", 00:05:42.586 "bdev_lvol_rename_lvstore", 00:05:42.586 "bdev_lvol_create_lvstore", 00:05:42.586 "bdev_raid_set_options", 00:05:42.586 
"bdev_raid_remove_base_bdev", 00:05:42.586 "bdev_raid_add_base_bdev", 00:05:42.586 "bdev_raid_delete", 00:05:42.586 "bdev_raid_create", 00:05:42.586 "bdev_raid_get_bdevs", 00:05:42.586 "bdev_error_inject_error", 00:05:42.586 "bdev_error_delete", 00:05:42.586 "bdev_error_create", 00:05:42.586 "bdev_split_delete", 00:05:42.586 "bdev_split_create", 00:05:42.586 "bdev_delay_delete", 00:05:42.586 "bdev_delay_create", 00:05:42.586 "bdev_delay_update_latency", 00:05:42.586 "bdev_zone_block_delete", 00:05:42.586 "bdev_zone_block_create", 00:05:42.586 "blobfs_create", 00:05:42.586 "blobfs_detect", 00:05:42.586 "blobfs_set_cache_size", 00:05:42.586 "bdev_aio_delete", 00:05:42.586 "bdev_aio_rescan", 00:05:42.586 "bdev_aio_create", 00:05:42.586 "bdev_ftl_set_property", 00:05:42.586 "bdev_ftl_get_properties", 00:05:42.586 "bdev_ftl_get_stats", 00:05:42.586 "bdev_ftl_unmap", 00:05:42.586 "bdev_ftl_unload", 00:05:42.586 "bdev_ftl_delete", 00:05:42.586 "bdev_ftl_load", 00:05:42.586 "bdev_ftl_create", 00:05:42.586 "bdev_virtio_attach_controller", 00:05:42.586 "bdev_virtio_scsi_get_devices", 00:05:42.586 "bdev_virtio_detach_controller", 00:05:42.586 "bdev_virtio_blk_set_hotplug", 00:05:42.586 "bdev_iscsi_delete", 00:05:42.586 "bdev_iscsi_create", 00:05:42.586 "bdev_iscsi_set_options", 00:05:42.586 "accel_error_inject_error", 00:05:42.586 "ioat_scan_accel_module", 00:05:42.586 "dsa_scan_accel_module", 00:05:42.587 "iaa_scan_accel_module", 00:05:42.587 "iscsi_set_options", 00:05:42.587 "iscsi_get_auth_groups", 00:05:42.587 "iscsi_auth_group_remove_secret", 00:05:42.587 "iscsi_auth_group_add_secret", 00:05:42.587 "iscsi_delete_auth_group", 00:05:42.587 "iscsi_create_auth_group", 00:05:42.587 "iscsi_set_discovery_auth", 00:05:42.587 "iscsi_get_options", 00:05:42.587 "iscsi_target_node_request_logout", 00:05:42.587 "iscsi_target_node_set_redirect", 00:05:42.587 "iscsi_target_node_set_auth", 00:05:42.587 "iscsi_target_node_add_lun", 00:05:42.587 "iscsi_get_connections", 00:05:42.587 "iscsi_portal_group_set_auth", 00:05:42.587 "iscsi_start_portal_group", 00:05:42.587 "iscsi_delete_portal_group", 00:05:42.587 "iscsi_create_portal_group", 00:05:42.587 "iscsi_get_portal_groups", 00:05:42.587 "iscsi_delete_target_node", 00:05:42.587 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.587 "iscsi_target_node_add_pg_ig_maps", 00:05:42.587 "iscsi_create_target_node", 00:05:42.587 "iscsi_get_target_nodes", 00:05:42.587 "iscsi_delete_initiator_group", 00:05:42.587 "iscsi_initiator_group_remove_initiators", 00:05:42.587 "iscsi_initiator_group_add_initiators", 00:05:42.587 "iscsi_create_initiator_group", 00:05:42.587 "iscsi_get_initiator_groups", 00:05:42.587 "nvmf_set_crdt", 00:05:42.587 "nvmf_set_config", 00:05:42.587 "nvmf_set_max_subsystems", 00:05:42.587 "nvmf_subsystem_get_listeners", 00:05:42.587 "nvmf_subsystem_get_qpairs", 00:05:42.587 "nvmf_subsystem_get_controllers", 00:05:42.587 "nvmf_get_stats", 00:05:42.587 "nvmf_get_transports", 00:05:42.587 "nvmf_create_transport", 00:05:42.587 "nvmf_get_targets", 00:05:42.587 "nvmf_delete_target", 00:05:42.587 "nvmf_create_target", 00:05:42.587 "nvmf_subsystem_allow_any_host", 00:05:42.587 "nvmf_subsystem_remove_host", 00:05:42.587 "nvmf_subsystem_add_host", 00:05:42.587 "nvmf_subsystem_remove_ns", 00:05:42.587 "nvmf_subsystem_add_ns", 00:05:42.587 "nvmf_subsystem_listener_set_ana_state", 00:05:42.587 "nvmf_discovery_get_referrals", 00:05:42.587 "nvmf_discovery_remove_referral", 00:05:42.587 "nvmf_discovery_add_referral", 00:05:42.587 "nvmf_subsystem_remove_listener", 
00:05:42.587 "nvmf_subsystem_add_listener", 00:05:42.587 "nvmf_delete_subsystem", 00:05:42.587 "nvmf_create_subsystem", 00:05:42.587 "nvmf_get_subsystems", 00:05:42.587 "env_dpdk_get_mem_stats", 00:05:42.587 "nbd_get_disks", 00:05:42.587 "nbd_stop_disk", 00:05:42.587 "nbd_start_disk", 00:05:42.587 "ublk_recover_disk", 00:05:42.587 "ublk_get_disks", 00:05:42.587 "ublk_stop_disk", 00:05:42.587 "ublk_start_disk", 00:05:42.587 "ublk_destroy_target", 00:05:42.587 "ublk_create_target", 00:05:42.587 "virtio_blk_create_transport", 00:05:42.587 "virtio_blk_get_transports", 00:05:42.587 "vhost_controller_set_coalescing", 00:05:42.587 "vhost_get_controllers", 00:05:42.587 "vhost_delete_controller", 00:05:42.587 "vhost_create_blk_controller", 00:05:42.587 "vhost_scsi_controller_remove_target", 00:05:42.587 "vhost_scsi_controller_add_target", 00:05:42.587 "vhost_start_scsi_controller", 00:05:42.587 "vhost_create_scsi_controller", 00:05:42.587 "thread_set_cpumask", 00:05:42.587 "framework_get_scheduler", 00:05:42.587 "framework_set_scheduler", 00:05:42.587 "framework_get_reactors", 00:05:42.587 "thread_get_io_channels", 00:05:42.587 "thread_get_pollers", 00:05:42.587 "thread_get_stats", 00:05:42.587 "framework_monitor_context_switch", 00:05:42.587 "spdk_kill_instance", 00:05:42.587 "log_enable_timestamps", 00:05:42.587 "log_get_flags", 00:05:42.587 "log_clear_flag", 00:05:42.587 "log_set_flag", 00:05:42.587 "log_get_level", 00:05:42.587 "log_set_level", 00:05:42.587 "log_get_print_level", 00:05:42.587 "log_set_print_level", 00:05:42.587 "framework_enable_cpumask_locks", 00:05:42.587 "framework_disable_cpumask_locks", 00:05:42.587 "framework_wait_init", 00:05:42.587 "framework_start_init", 00:05:42.587 "scsi_get_devices", 00:05:42.587 "bdev_get_histogram", 00:05:42.587 "bdev_enable_histogram", 00:05:42.587 "bdev_set_qos_limit", 00:05:42.587 "bdev_set_qd_sampling_period", 00:05:42.587 "bdev_get_bdevs", 00:05:42.587 "bdev_reset_iostat", 00:05:42.587 "bdev_get_iostat", 00:05:42.587 "bdev_examine", 00:05:42.587 "bdev_wait_for_examine", 00:05:42.587 "bdev_set_options", 00:05:42.587 "notify_get_notifications", 00:05:42.587 "notify_get_types", 00:05:42.587 "accel_get_stats", 00:05:42.587 "accel_set_options", 00:05:42.587 "accel_set_driver", 00:05:42.587 "accel_crypto_key_destroy", 00:05:42.587 "accel_crypto_keys_get", 00:05:42.587 "accel_crypto_key_create", 00:05:42.587 "accel_assign_opc", 00:05:42.587 "accel_get_module_info", 00:05:42.587 "accel_get_opc_assignments", 00:05:42.587 "vmd_rescan", 00:05:42.587 "vmd_remove_device", 00:05:42.587 "vmd_enable", 00:05:42.587 "sock_set_default_impl", 00:05:42.587 "sock_impl_set_options", 00:05:42.587 "sock_impl_get_options", 00:05:42.587 "iobuf_get_stats", 00:05:42.587 "iobuf_set_options", 00:05:42.587 "framework_get_pci_devices", 00:05:42.587 "framework_get_config", 00:05:42.587 "framework_get_subsystems", 00:05:42.587 "trace_get_info", 00:05:42.587 "trace_get_tpoint_group_mask", 00:05:42.587 "trace_disable_tpoint_group", 00:05:42.587 "trace_enable_tpoint_group", 00:05:42.587 "trace_clear_tpoint_mask", 00:05:42.587 "trace_set_tpoint_mask", 00:05:42.587 "spdk_get_version", 00:05:42.587 "rpc_get_methods" 00:05:42.587 ] 00:05:42.587 22:32:27 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.587 22:32:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:42.587 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:05:42.587 22:32:27 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.587 22:32:27 -- spdkcli/tcp.sh@38 -- # killprocess 
894235 00:05:42.587 22:32:27 -- common/autotest_common.sh@926 -- # '[' -z 894235 ']' 00:05:42.587 22:32:27 -- common/autotest_common.sh@930 -- # kill -0 894235 00:05:42.587 22:32:27 -- common/autotest_common.sh@931 -- # uname 00:05:42.587 22:32:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.587 22:32:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 894235 00:05:42.587 22:32:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.587 22:32:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.587 22:32:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 894235' 00:05:42.587 killing process with pid 894235 00:05:42.587 22:32:27 -- common/autotest_common.sh@945 -- # kill 894235 00:05:42.587 22:32:27 -- common/autotest_common.sh@950 -- # wait 894235 00:05:42.849 00:05:42.849 real 0m1.375s 00:05:42.849 user 0m2.528s 00:05:42.849 sys 0m0.408s 00:05:42.849 22:32:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.849 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 ************************************ 00:05:42.849 END TEST spdkcli_tcp 00:05:42.849 ************************************ 00:05:42.849 22:32:27 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.849 22:32:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.849 22:32:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.849 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 ************************************ 00:05:42.849 START TEST dpdk_mem_utility 00:05:42.849 ************************************ 00:05:42.849 22:32:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.849 * Looking for test storage... 00:05:42.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:42.849 22:32:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:42.849 22:32:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=894636 00:05:42.849 22:32:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 894636 00:05:42.849 22:32:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.849 22:32:27 -- common/autotest_common.sh@819 -- # '[' -z 894636 ']' 00:05:42.849 22:32:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.849 22:32:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.849 22:32:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.849 22:32:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.849 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:05:43.109 [2024-04-15 22:32:27.687243] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:05:43.109 [2024-04-15 22:32:27.687319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894636 ] 00:05:43.109 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.109 [2024-04-15 22:32:27.756477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.109 [2024-04-15 22:32:27.819011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.109 [2024-04-15 22:32:27.819149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.682 22:32:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.682 22:32:28 -- common/autotest_common.sh@852 -- # return 0 00:05:43.682 22:32:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:43.682 22:32:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:43.682 22:32:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.682 22:32:28 -- common/autotest_common.sh@10 -- # set +x 00:05:43.682 { 00:05:43.682 "filename": "/tmp/spdk_mem_dump.txt" 00:05:43.682 } 00:05:43.682 22:32:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.682 22:32:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.968 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:43.968 1 heaps totaling size 814.000000 MiB 00:05:43.968 size: 814.000000 MiB heap id: 0 00:05:43.968 end heaps---------- 00:05:43.968 8 mempools totaling size 598.116089 MiB 00:05:43.968 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:43.968 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:43.968 size: 84.521057 MiB name: bdev_io_894636 00:05:43.968 size: 51.011292 MiB name: evtpool_894636 00:05:43.968 size: 50.003479 MiB name: msgpool_894636 00:05:43.968 size: 21.763794 MiB name: PDU_Pool 00:05:43.968 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:43.968 size: 0.026123 MiB name: Session_Pool 00:05:43.968 end mempools------- 00:05:43.968 6 memzones totaling size 4.142822 MiB 00:05:43.968 size: 1.000366 MiB name: RG_ring_0_894636 00:05:43.968 size: 1.000366 MiB name: RG_ring_1_894636 00:05:43.968 size: 1.000366 MiB name: RG_ring_4_894636 00:05:43.968 size: 1.000366 MiB name: RG_ring_5_894636 00:05:43.968 size: 0.125366 MiB name: RG_ring_2_894636 00:05:43.968 size: 0.015991 MiB name: RG_ring_3_894636 00:05:43.968 end memzones------- 00:05:43.968 22:32:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:43.968 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:43.968 list of free elements. 
size: 12.519348 MiB 00:05:43.968 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:43.968 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:43.968 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:43.968 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:43.968 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:43.968 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:43.968 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:43.968 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:43.968 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:43.968 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:43.968 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:43.968 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:43.968 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:43.968 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:43.968 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:43.968 list of standard malloc elements. size: 199.218079 MiB 00:05:43.968 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:43.968 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:43.968 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:43.968 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:43.968 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:43.968 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:43.968 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:43.968 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:43.968 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:43.968 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:43.968 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:43.968 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:43.968 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:43.968 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:43.968 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:43.968 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:43.968 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:43.968 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:43.968 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:43.968 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:43.968 list of memzone associated elements. size: 602.262573 MiB 00:05:43.968 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:43.968 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:43.968 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:43.968 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:43.968 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:43.968 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_894636_0 00:05:43.968 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:43.968 associated memzone info: size: 48.002930 MiB name: MP_evtpool_894636_0 00:05:43.968 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:43.968 associated memzone info: size: 48.002930 MiB name: MP_msgpool_894636_0 00:05:43.968 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:43.968 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:43.968 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:43.968 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:43.968 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:43.968 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_894636 00:05:43.968 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:43.968 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_894636 00:05:43.968 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:43.968 associated memzone info: size: 1.007996 MiB name: MP_evtpool_894636 00:05:43.968 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:43.968 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:43.968 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:43.968 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:43.968 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:43.969 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:43.969 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:43.969 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:43.969 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:43.969 associated memzone info: size: 1.000366 MiB name: RG_ring_0_894636 00:05:43.969 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:43.969 associated memzone info: size: 1.000366 MiB name: RG_ring_1_894636 00:05:43.969 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:43.969 associated memzone info: size: 1.000366 MiB name: RG_ring_4_894636 00:05:43.969 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:43.969 associated memzone info: size: 1.000366 MiB name: RG_ring_5_894636 00:05:43.969 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:43.969 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_894636 00:05:43.969 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:43.969 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:43.969 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:43.969 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:43.969 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:43.969 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:43.969 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:43.969 associated memzone info: size: 0.125366 MiB name: RG_ring_2_894636 00:05:43.969 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:43.969 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:43.969 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:43.969 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:43.969 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:43.969 associated memzone info: size: 0.015991 MiB name: RG_ring_3_894636 00:05:43.969 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:43.969 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:43.969 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:43.969 associated memzone info: size: 0.000183 MiB name: MP_msgpool_894636 00:05:43.969 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:43.969 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_894636 00:05:43.969 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:43.969 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:43.969 22:32:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:43.969 22:32:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 894636 00:05:43.969 22:32:28 -- common/autotest_common.sh@926 -- # '[' -z 894636 ']' 00:05:43.969 22:32:28 -- common/autotest_common.sh@930 -- # kill -0 894636 00:05:43.969 22:32:28 -- common/autotest_common.sh@931 -- # uname 00:05:43.969 22:32:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:43.969 22:32:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 894636 00:05:43.969 22:32:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:43.969 22:32:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:43.969 22:32:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 894636' 00:05:43.969 killing process with pid 894636 00:05:43.969 22:32:28 -- common/autotest_common.sh@945 -- # kill 894636 00:05:43.969 22:32:28 -- common/autotest_common.sh@950 -- # wait 894636 00:05:44.231 00:05:44.231 real 0m1.297s 00:05:44.231 user 0m1.415s 00:05:44.231 sys 0m0.343s 00:05:44.231 22:32:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.231 22:32:28 -- common/autotest_common.sh@10 -- # set +x 00:05:44.231 ************************************ 00:05:44.231 END TEST dpdk_mem_utility 00:05:44.231 ************************************ 00:05:44.231 22:32:28 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.231 22:32:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.231 22:32:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.231 22:32:28 -- common/autotest_common.sh@10 -- # set +x 00:05:44.231 
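For reference, the dpdk_mem_utility flow traced above can be reproduced by hand with the same tools; the lines below are a minimal sketch (not the test itself), assuming spdk_tgt is reachable on its default RPC socket /var/tmp/spdk.sock and that a fixed sleep is an acceptable stand-in for the waitforlisten helper used by the test:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt &            # start the target (default socket /var/tmp/spdk.sock)
    tgt_pid=$!
    sleep 3                               # crude stand-in for waitforlisten
    # Ask the target to dump its DPDK memory stats; the RPC replies with the
    # dump file name (/tmp/spdk_mem_dump.txt in the run above).
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones, then detail memzone 0 (-m 0),
    # producing output like the "DPDK memory size 814.000000 MiB" report above.
    $SPDK/scripts/dpdk_mem_info.py
    $SPDK/scripts/dpdk_mem_info.py -m 0
    kill $tgt_pid && wait $tgt_pid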
************************************ 00:05:44.231 START TEST event 00:05:44.231 ************************************ 00:05:44.231 22:32:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.231 * Looking for test storage... 00:05:44.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.231 22:32:28 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:44.231 22:32:28 -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.231 22:32:28 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.231 22:32:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:44.231 22:32:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.231 22:32:28 -- common/autotest_common.sh@10 -- # set +x 00:05:44.231 ************************************ 00:05:44.231 START TEST event_perf 00:05:44.231 ************************************ 00:05:44.231 22:32:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.231 Running I/O for 1 seconds...[2024-04-15 22:32:29.008908] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:05:44.231 [2024-04-15 22:32:29.009008] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895031 ] 00:05:44.493 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.493 [2024-04-15 22:32:29.083352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.493 [2024-04-15 22:32:29.156767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.493 [2024-04-15 22:32:29.156903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.493 [2024-04-15 22:32:29.157060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.493 [2024-04-15 22:32:29.157060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.437 Running I/O for 1 seconds... 00:05:45.437 lcore 0: 170194 00:05:45.437 lcore 1: 170191 00:05:45.437 lcore 2: 170190 00:05:45.437 lcore 3: 170193 00:05:45.437 done. 
00:05:45.437 00:05:45.437 real 0m1.223s 00:05:45.437 user 0m4.146s 00:05:45.437 sys 0m0.076s 00:05:45.437 22:32:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.437 22:32:30 -- common/autotest_common.sh@10 -- # set +x 00:05:45.437 ************************************ 00:05:45.437 END TEST event_perf 00:05:45.437 ************************************ 00:05:45.698 22:32:30 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:45.698 22:32:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:45.698 22:32:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.698 22:32:30 -- common/autotest_common.sh@10 -- # set +x 00:05:45.698 ************************************ 00:05:45.698 START TEST event_reactor 00:05:45.698 ************************************ 00:05:45.698 22:32:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:45.698 [2024-04-15 22:32:30.276934] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:05:45.698 [2024-04-15 22:32:30.277025] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895185 ] 00:05:45.698 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.698 [2024-04-15 22:32:30.346652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.698 [2024-04-15 22:32:30.409524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.081 test_start 00:05:47.081 oneshot 00:05:47.081 tick 100 00:05:47.081 tick 100 00:05:47.081 tick 250 00:05:47.081 tick 100 00:05:47.081 tick 100 00:05:47.081 tick 100 00:05:47.081 tick 250 00:05:47.081 tick 500 00:05:47.081 tick 100 00:05:47.081 tick 100 00:05:47.081 tick 250 00:05:47.081 tick 100 00:05:47.081 tick 100 00:05:47.081 test_end 00:05:47.081 00:05:47.081 real 0m1.205s 00:05:47.081 user 0m1.122s 00:05:47.081 sys 0m0.079s 00:05:47.081 22:32:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.081 22:32:31 -- common/autotest_common.sh@10 -- # set +x 00:05:47.081 ************************************ 00:05:47.081 END TEST event_reactor 00:05:47.081 ************************************ 00:05:47.081 22:32:31 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.081 22:32:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:47.081 22:32:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.081 22:32:31 -- common/autotest_common.sh@10 -- # set +x 00:05:47.081 ************************************ 00:05:47.081 START TEST event_reactor_perf 00:05:47.081 ************************************ 00:05:47.081 22:32:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.081 [2024-04-15 22:32:31.526045] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:05:47.081 [2024-04-15 22:32:31.526154] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895420 ] 00:05:47.081 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.081 [2024-04-15 22:32:31.596374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.081 [2024-04-15 22:32:31.658008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.023 test_start 00:05:48.023 test_end 00:05:48.023 Performance: 363096 events per second 00:05:48.023 00:05:48.023 real 0m1.204s 00:05:48.023 user 0m1.122s 00:05:48.023 sys 0m0.078s 00:05:48.023 22:32:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.023 22:32:32 -- common/autotest_common.sh@10 -- # set +x 00:05:48.023 ************************************ 00:05:48.023 END TEST event_reactor_perf 00:05:48.023 ************************************ 00:05:48.023 22:32:32 -- event/event.sh@49 -- # uname -s 00:05:48.023 22:32:32 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.023 22:32:32 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.023 22:32:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.023 22:32:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.023 22:32:32 -- common/autotest_common.sh@10 -- # set +x 00:05:48.023 ************************************ 00:05:48.023 START TEST event_scheduler 00:05:48.023 ************************************ 00:05:48.023 22:32:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.284 * Looking for test storage... 00:05:48.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:48.284 22:32:32 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.284 22:32:32 -- scheduler/scheduler.sh@35 -- # scheduler_pid=895804 00:05:48.284 22:32:32 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.284 22:32:32 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.284 22:32:32 -- scheduler/scheduler.sh@37 -- # waitforlisten 895804 00:05:48.284 22:32:32 -- common/autotest_common.sh@819 -- # '[' -z 895804 ']' 00:05:48.284 22:32:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.284 22:32:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.284 22:32:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.284 22:32:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.284 22:32:32 -- common/autotest_common.sh@10 -- # set +x 00:05:48.284 [2024-04-15 22:32:32.895845] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:05:48.284 [2024-04-15 22:32:32.895909] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895804 ] 00:05:48.284 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.284 [2024-04-15 22:32:32.954369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.284 [2024-04-15 22:32:33.013988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.284 [2024-04-15 22:32:33.014130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.284 [2024-04-15 22:32:33.014289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.284 [2024-04-15 22:32:33.014291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.226 22:32:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.226 22:32:33 -- common/autotest_common.sh@852 -- # return 0 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 POWER: Env isn't set yet! 00:05:49.226 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:49.226 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.226 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.226 POWER: Attempting to initialise PSTAT power management... 00:05:49.226 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:49.226 POWER: Initialized successfully for lcore 0 power management 00:05:49.226 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:49.226 POWER: Initialized successfully for lcore 1 power management 00:05:49.226 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:49.226 POWER: Initialized successfully for lcore 2 power management 00:05:49.226 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:49.226 POWER: Initialized successfully for lcore 3 power management 00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 [2024-04-15 22:32:33.784697] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
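The scheduler_create_thread test that follows drives the scheduler test app entirely over RPC; a minimal sketch of that sequence, assuming the app from the run above is still listening on /var/tmp/spdk.sock and that rpc.py can import scheduler_plugin (for example via PYTHONPATH pointing at test/event/scheduler):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC framework_set_scheduler dynamic      # select the dynamic scheduler
    $RPC framework_start_init                 # leave the --wait-for-rpc state
    # Create a thread pinned to core 0 (mask 0x1) that reports itself 100% active;
    # the RPC prints the new thread id, which the test below captures the same way.
    tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    # Drop the reported activity to 50%, then delete the thread again.
    $RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    $RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"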
00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.226 22:32:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.226 22:32:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 ************************************ 00:05:49.226 START TEST scheduler_create_thread 00:05:49.226 ************************************ 00:05:49.226 22:32:33 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 2 00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 3 00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 4 00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 5 00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 6 00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 7 00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.226 8 00:05:49.226 22:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.226 22:32:33 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.226 22:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.226 22:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:50.612 9 00:05:50.612 
22:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.612 22:32:35 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:50.612 22:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.612 22:32:35 -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 10 00:05:51.553 22:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.553 22:32:36 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.553 22:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.553 22:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:52.494 22:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.495 22:32:37 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:52.495 22:32:37 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:52.495 22:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.495 22:32:37 -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 22:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:53.065 22:32:37 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:53.065 22:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:53.065 22:32:37 -- common/autotest_common.sh@10 -- # set +x 00:05:53.636 22:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:53.636 22:32:38 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.636 22:32:38 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.636 22:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:53.636 22:32:38 -- common/autotest_common.sh@10 -- # set +x 00:05:54.206 22:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.206 00:05:54.206 real 0m5.068s 00:05:54.206 user 0m0.025s 00:05:54.206 sys 0m0.006s 00:05:54.206 22:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.206 22:32:38 -- common/autotest_common.sh@10 -- # set +x 00:05:54.206 ************************************ 00:05:54.206 END TEST scheduler_create_thread 00:05:54.206 ************************************ 00:05:54.206 22:32:38 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.206 22:32:38 -- scheduler/scheduler.sh@46 -- # killprocess 895804 00:05:54.206 22:32:38 -- common/autotest_common.sh@926 -- # '[' -z 895804 ']' 00:05:54.206 22:32:38 -- common/autotest_common.sh@930 -- # kill -0 895804 00:05:54.206 22:32:38 -- common/autotest_common.sh@931 -- # uname 00:05:54.206 22:32:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.207 22:32:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 895804 00:05:54.207 22:32:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:54.207 22:32:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:54.207 22:32:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 895804' 00:05:54.207 killing process with pid 895804 00:05:54.207 22:32:38 -- common/autotest_common.sh@945 -- # kill 895804 00:05:54.207 22:32:38 -- common/autotest_common.sh@950 -- # wait 895804 00:05:54.207 [2024-04-15 22:32:38.991253] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
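
Editor's note: the thread-creation calls exercised by scheduler_create_thread above reduce to a handful of RPCs. A minimal sketch follows, assuming an SPDK scheduler test app is already listening on the default RPC socket and that the test-only scheduler_plugin.py (under test/event/scheduler/) is importable via PYTHONPATH; -n, -m and -a are read here as thread name, cpumask and busy percentage, which is what the logged values suggest.

  # sketch only: flag meanings and plugin location are inferred from the log above
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  export PYTHONPATH=$SPDK/test/event/scheduler:$PYTHONPATH
  rpc() { "$SPDK/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

  rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
  rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, ~30% busy
  tid=$(rpc scheduler_thread_create -n half_active -a 0)       # created idle...
  rpc scheduler_thread_set_active "$tid" 50                    # ...then bumped to 50% busy
  tid=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid"                           # create-and-delete case
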
00:05:54.467 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:54.467 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:54.467 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:54.467 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:54.467 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:54.467 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:54.467 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:54.467 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:54.467 00:05:54.467 real 0m6.417s 00:05:54.467 user 0m15.648s 00:05:54.467 sys 0m0.336s 00:05:54.467 22:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.467 22:32:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.467 ************************************ 00:05:54.467 END TEST event_scheduler 00:05:54.467 ************************************ 00:05:54.467 22:32:39 -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.467 22:32:39 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.467 22:32:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.467 22:32:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.467 22:32:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.467 ************************************ 00:05:54.467 START TEST app_repeat 00:05:54.467 ************************************ 00:05:54.467 22:32:39 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:54.467 22:32:39 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.467 22:32:39 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.467 22:32:39 -- event/event.sh@13 -- # local nbd_list 00:05:54.467 22:32:39 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.467 22:32:39 -- event/event.sh@14 -- # local bdev_list 00:05:54.467 22:32:39 -- event/event.sh@15 -- # local repeat_times=4 00:05:54.467 22:32:39 -- event/event.sh@17 -- # modprobe nbd 00:05:54.467 22:32:39 -- event/event.sh@19 -- # repeat_pid=897204 00:05:54.467 22:32:39 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.467 22:32:39 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.467 22:32:39 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 897204' 00:05:54.467 Process app_repeat pid: 897204 00:05:54.467 22:32:39 -- event/event.sh@23 -- # for i in {0..2} 00:05:54.467 22:32:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.467 spdk_app_start Round 0 00:05:54.467 22:32:39 -- event/event.sh@25 -- # waitforlisten 897204 /var/tmp/spdk-nbd.sock 00:05:54.467 22:32:39 -- common/autotest_common.sh@819 -- # '[' -z 897204 ']' 00:05:54.467 22:32:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.467 22:32:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.467 22:32:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:54.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.467 22:32:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.467 22:32:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.467 [2024-04-15 22:32:39.259449] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:05:54.468 [2024-04-15 22:32:39.259522] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897204 ] 00:05:54.729 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.729 [2024-04-15 22:32:39.329334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.729 [2024-04-15 22:32:39.399965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.729 [2024-04-15 22:32:39.399971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.301 22:32:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.301 22:32:40 -- common/autotest_common.sh@852 -- # return 0 00:05:55.301 22:32:40 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.563 Malloc0 00:05:55.563 22:32:40 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.825 Malloc1 00:05:55.825 22:32:40 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@12 -- # local i 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.825 /dev/nbd0 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.825 22:32:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:55.825 22:32:40 -- common/autotest_common.sh@857 -- # local i 00:05:55.825 22:32:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:55.825 22:32:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:55.825 22:32:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:55.825 22:32:40 -- 
common/autotest_common.sh@861 -- # break 00:05:55.825 22:32:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:55.825 22:32:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:55.825 22:32:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.825 1+0 records in 00:05:55.825 1+0 records out 00:05:55.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301586 s, 13.6 MB/s 00:05:55.825 22:32:40 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.825 22:32:40 -- common/autotest_common.sh@874 -- # size=4096 00:05:55.825 22:32:40 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.825 22:32:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:55.825 22:32:40 -- common/autotest_common.sh@877 -- # return 0 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.825 22:32:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.086 /dev/nbd1 00:05:56.086 22:32:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.086 22:32:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.086 22:32:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:56.086 22:32:40 -- common/autotest_common.sh@857 -- # local i 00:05:56.086 22:32:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:56.086 22:32:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:56.086 22:32:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:56.086 22:32:40 -- common/autotest_common.sh@861 -- # break 00:05:56.086 22:32:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:56.086 22:32:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:56.086 22:32:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.086 1+0 records in 00:05:56.086 1+0 records out 00:05:56.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271228 s, 15.1 MB/s 00:05:56.086 22:32:40 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.086 22:32:40 -- common/autotest_common.sh@874 -- # size=4096 00:05:56.086 22:32:40 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.086 22:32:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:56.086 22:32:40 -- common/autotest_common.sh@877 -- # return 0 00:05:56.086 22:32:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.086 22:32:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.086 22:32:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.086 22:32:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.086 22:32:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.348 { 00:05:56.348 "nbd_device": "/dev/nbd0", 00:05:56.348 "bdev_name": "Malloc0" 00:05:56.348 }, 00:05:56.348 { 00:05:56.348 "nbd_device": "/dev/nbd1", 
00:05:56.348 "bdev_name": "Malloc1" 00:05:56.348 } 00:05:56.348 ]' 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.348 { 00:05:56.348 "nbd_device": "/dev/nbd0", 00:05:56.348 "bdev_name": "Malloc0" 00:05:56.348 }, 00:05:56.348 { 00:05:56.348 "nbd_device": "/dev/nbd1", 00:05:56.348 "bdev_name": "Malloc1" 00:05:56.348 } 00:05:56.348 ]' 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.348 /dev/nbd1' 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.348 /dev/nbd1' 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.348 256+0 records in 00:05:56.348 256+0 records out 00:05:56.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123483 s, 84.9 MB/s 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.348 22:32:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.348 256+0 records in 00:05:56.348 256+0 records out 00:05:56.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161064 s, 65.1 MB/s 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.348 256+0 records in 00:05:56.348 256+0 records out 00:05:56.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173138 s, 60.6 MB/s 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@51 -- # local i 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.348 22:32:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@41 -- # break 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@41 -- # break 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.610 22:32:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@65 -- # true 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.872 22:32:41 -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.872 22:32:41 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.133 22:32:41 -- event/event.sh@35 -- # 
sleep 3 00:05:57.133 [2024-04-15 22:32:41.910325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.394 [2024-04-15 22:32:41.973336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.394 [2024-04-15 22:32:41.973342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.394 [2024-04-15 22:32:42.005040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.394 [2024-04-15 22:32:42.005072] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.703 22:32:44 -- event/event.sh@23 -- # for i in {0..2} 00:06:00.703 22:32:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.703 spdk_app_start Round 1 00:06:00.703 22:32:44 -- event/event.sh@25 -- # waitforlisten 897204 /var/tmp/spdk-nbd.sock 00:06:00.703 22:32:44 -- common/autotest_common.sh@819 -- # '[' -z 897204 ']' 00:06:00.703 22:32:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.703 22:32:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.703 22:32:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.703 22:32:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.703 22:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:00.703 22:32:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.703 22:32:44 -- common/autotest_common.sh@852 -- # return 0 00:06:00.703 22:32:44 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.703 Malloc0 00:06:00.703 22:32:45 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.703 Malloc1 00:06:00.703 22:32:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@12 -- # local i 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.703 /dev/nbd0 00:06:00.703 22:32:45 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.703 22:32:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:00.703 22:32:45 -- common/autotest_common.sh@857 -- # local i 00:06:00.703 22:32:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:00.703 22:32:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:00.703 22:32:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:00.703 22:32:45 -- common/autotest_common.sh@861 -- # break 00:06:00.703 22:32:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:00.703 22:32:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:00.703 22:32:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.703 1+0 records in 00:06:00.703 1+0 records out 00:06:00.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275543 s, 14.9 MB/s 00:06:00.703 22:32:45 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.703 22:32:45 -- common/autotest_common.sh@874 -- # size=4096 00:06:00.703 22:32:45 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.703 22:32:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:00.703 22:32:45 -- common/autotest_common.sh@877 -- # return 0 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.703 22:32:45 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.965 /dev/nbd1 00:06:00.965 22:32:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.965 22:32:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.965 22:32:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:00.965 22:32:45 -- common/autotest_common.sh@857 -- # local i 00:06:00.965 22:32:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:00.965 22:32:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:00.965 22:32:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:00.965 22:32:45 -- common/autotest_common.sh@861 -- # break 00:06:00.965 22:32:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:00.965 22:32:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:00.965 22:32:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.965 1+0 records in 00:06:00.965 1+0 records out 00:06:00.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274265 s, 14.9 MB/s 00:06:00.965 22:32:45 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.965 22:32:45 -- common/autotest_common.sh@874 -- # size=4096 00:06:00.965 22:32:45 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.965 22:32:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:00.965 22:32:45 -- common/autotest_common.sh@877 -- # return 0 00:06:00.965 22:32:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.965 22:32:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.965 22:32:45 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.965 22:32:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.965 22:32:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.227 { 00:06:01.227 "nbd_device": "/dev/nbd0", 00:06:01.227 "bdev_name": "Malloc0" 00:06:01.227 }, 00:06:01.227 { 00:06:01.227 "nbd_device": "/dev/nbd1", 00:06:01.227 "bdev_name": "Malloc1" 00:06:01.227 } 00:06:01.227 ]' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.227 { 00:06:01.227 "nbd_device": "/dev/nbd0", 00:06:01.227 "bdev_name": "Malloc0" 00:06:01.227 }, 00:06:01.227 { 00:06:01.227 "nbd_device": "/dev/nbd1", 00:06:01.227 "bdev_name": "Malloc1" 00:06:01.227 } 00:06:01.227 ]' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.227 /dev/nbd1' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.227 /dev/nbd1' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.227 256+0 records in 00:06:01.227 256+0 records out 00:06:01.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124935 s, 83.9 MB/s 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.227 256+0 records in 00:06:01.227 256+0 records out 00:06:01.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016401 s, 63.9 MB/s 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.227 256+0 records in 00:06:01.227 256+0 records out 00:06:01.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177183 s, 59.2 MB/s 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@51 -- # local i 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.227 22:32:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@41 -- # break 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@41 -- # break 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.489 22:32:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@65 -- # true 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.751 22:32:46 -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.751 22:32:46 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.013 22:32:46 -- event/event.sh@35 -- # sleep 3 00:06:02.013 [2024-04-15 22:32:46.760620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.275 [2024-04-15 22:32:46.822226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.275 [2024-04-15 22:32:46.822232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.275 [2024-04-15 22:32:46.853920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.275 [2024-04-15 22:32:46.853953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.871 22:32:49 -- event/event.sh@23 -- # for i in {0..2} 00:06:04.871 22:32:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.871 spdk_app_start Round 2 00:06:04.871 22:32:49 -- event/event.sh@25 -- # waitforlisten 897204 /var/tmp/spdk-nbd.sock 00:06:04.871 22:32:49 -- common/autotest_common.sh@819 -- # '[' -z 897204 ']' 00:06:04.871 22:32:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.871 22:32:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.871 22:32:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
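
Editor's note: each app_repeat round above follows the same pattern: create two malloc bdevs over the nbd RPC socket, export them as kernel nbd devices, write 1 MiB of random data to each and read it back. Condensed into a standalone sketch (paths are illustrative; it assumes an SPDK app is already serving RPCs on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096          # 64 MiB malloc bdev with 4 KiB blocks -> Malloc0
  $rpc bdev_malloc_create 64 4096          # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0    # export each bdev as a kernel nbd device
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'             # expect both device nodes listed
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct   # write it to the device
      cmp -b -n 1M /tmp/nbdrandtest "$dev"                              # read back and compare
  done
  rm /tmp/nbdrandtest
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
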
00:06:04.871 22:32:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.871 22:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.133 22:32:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.133 22:32:49 -- common/autotest_common.sh@852 -- # return 0 00:06:05.133 22:32:49 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.133 Malloc0 00:06:05.133 22:32:49 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.394 Malloc1 00:06:05.394 22:32:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@12 -- # local i 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.394 22:32:50 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.655 /dev/nbd0 00:06:05.655 22:32:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.655 22:32:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.655 22:32:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:05.655 22:32:50 -- common/autotest_common.sh@857 -- # local i 00:06:05.655 22:32:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:05.655 22:32:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:05.655 22:32:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:05.655 22:32:50 -- common/autotest_common.sh@861 -- # break 00:06:05.655 22:32:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:05.655 22:32:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:05.655 22:32:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.655 1+0 records in 00:06:05.655 1+0 records out 00:06:05.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274022 s, 14.9 MB/s 00:06:05.655 22:32:50 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.655 22:32:50 -- common/autotest_common.sh@874 -- # size=4096 00:06:05.655 22:32:50 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.655 22:32:50 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:05.655 22:32:50 -- common/autotest_common.sh@877 -- # return 0 00:06:05.655 22:32:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.655 22:32:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.655 22:32:50 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.655 /dev/nbd1 00:06:05.655 22:32:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.655 22:32:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.655 22:32:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:05.655 22:32:50 -- common/autotest_common.sh@857 -- # local i 00:06:05.655 22:32:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:05.655 22:32:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:05.655 22:32:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:05.655 22:32:50 -- common/autotest_common.sh@861 -- # break 00:06:05.655 22:32:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:05.655 22:32:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:05.655 22:32:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.655 1+0 records in 00:06:05.655 1+0 records out 00:06:05.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364944 s, 11.2 MB/s 00:06:05.655 22:32:50 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.917 22:32:50 -- common/autotest_common.sh@874 -- # size=4096 00:06:05.917 22:32:50 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.917 22:32:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:05.917 22:32:50 -- common/autotest_common.sh@877 -- # return 0 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.917 { 00:06:05.917 "nbd_device": "/dev/nbd0", 00:06:05.917 "bdev_name": "Malloc0" 00:06:05.917 }, 00:06:05.917 { 00:06:05.917 "nbd_device": "/dev/nbd1", 00:06:05.917 "bdev_name": "Malloc1" 00:06:05.917 } 00:06:05.917 ]' 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.917 { 00:06:05.917 "nbd_device": "/dev/nbd0", 00:06:05.917 "bdev_name": "Malloc0" 00:06:05.917 }, 00:06:05.917 { 00:06:05.917 "nbd_device": "/dev/nbd1", 00:06:05.917 "bdev_name": "Malloc1" 00:06:05.917 } 00:06:05.917 ]' 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.917 /dev/nbd1' 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.917 /dev/nbd1' 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.917 22:32:50 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.917 256+0 records in 00:06:05.917 256+0 records out 00:06:05.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00299765 s, 350 MB/s 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.917 256+0 records in 00:06:05.917 256+0 records out 00:06:05.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160605 s, 65.3 MB/s 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.917 256+0 records in 00:06:05.917 256+0 records out 00:06:05.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169051 s, 62.0 MB/s 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.917 22:32:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.178 22:32:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.178 22:32:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.178 22:32:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.178 22:32:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.178 22:32:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.178 22:32:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@51 -- # local i 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.179 22:32:50 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@41 -- # break 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.179 22:32:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@41 -- # break 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.440 22:32:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@65 -- # true 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.701 22:32:51 -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.701 22:32:51 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.701 22:32:51 -- event/event.sh@35 -- # sleep 3 00:06:06.963 [2024-04-15 22:32:51.598190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.963 [2024-04-15 22:32:51.659938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.963 [2024-04-15 22:32:51.659944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.963 [2024-04-15 22:32:51.691655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.963 [2024-04-15 22:32:51.691691] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
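
Editor's note: the waitfornbd/waitfornbd_exit helpers that bracket every nbd_start_disk/nbd_stop_disk call above poll /proc/partitions and then prove the device with a single direct-I/O read. Roughly (a sketch of the idea, not the exact autotest_common.sh code):

  waitfornbd() {                      # usage: waitfornbd nbd0
      local name=$1 size i
      for ((i = 1; i <= 20; i++)); do                 # wait for the node to appear
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      dd if=/dev/"$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one 4 KiB direct read
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]                                # non-empty read => device answers I/O
  }

  waitfornbd_exit() {                 # usage: waitfornbd_exit nbd0
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do                 # wait for the node to disappear
          grep -q -w "$name" /proc/partitions || break
          sleep 0.1
      done
  }
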
00:06:10.268 22:32:54 -- event/event.sh@38 -- # waitforlisten 897204 /var/tmp/spdk-nbd.sock 00:06:10.268 22:32:54 -- common/autotest_common.sh@819 -- # '[' -z 897204 ']' 00:06:10.268 22:32:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.268 22:32:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:10.268 22:32:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.268 22:32:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:10.268 22:32:54 -- common/autotest_common.sh@10 -- # set +x 00:06:10.268 22:32:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.268 22:32:54 -- common/autotest_common.sh@852 -- # return 0 00:06:10.268 22:32:54 -- event/event.sh@39 -- # killprocess 897204 00:06:10.268 22:32:54 -- common/autotest_common.sh@926 -- # '[' -z 897204 ']' 00:06:10.268 22:32:54 -- common/autotest_common.sh@930 -- # kill -0 897204 00:06:10.268 22:32:54 -- common/autotest_common.sh@931 -- # uname 00:06:10.268 22:32:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:10.268 22:32:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 897204 00:06:10.268 22:32:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:10.268 22:32:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:10.268 22:32:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 897204' 00:06:10.268 killing process with pid 897204 00:06:10.268 22:32:54 -- common/autotest_common.sh@945 -- # kill 897204 00:06:10.268 22:32:54 -- common/autotest_common.sh@950 -- # wait 897204 00:06:10.268 spdk_app_start is called in Round 0. 00:06:10.268 Shutdown signal received, stop current app iteration 00:06:10.268 Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 reinitialization... 00:06:10.268 spdk_app_start is called in Round 1. 00:06:10.268 Shutdown signal received, stop current app iteration 00:06:10.268 Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 reinitialization... 00:06:10.268 spdk_app_start is called in Round 2. 00:06:10.268 Shutdown signal received, stop current app iteration 00:06:10.268 Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 reinitialization... 00:06:10.268 spdk_app_start is called in Round 3. 
00:06:10.268 Shutdown signal received, stop current app iteration 00:06:10.268 22:32:54 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.268 22:32:54 -- event/event.sh@42 -- # return 0 00:06:10.268 00:06:10.268 real 0m15.559s 00:06:10.268 user 0m33.360s 00:06:10.268 sys 0m2.168s 00:06:10.268 22:32:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.268 22:32:54 -- common/autotest_common.sh@10 -- # set +x 00:06:10.268 ************************************ 00:06:10.268 END TEST app_repeat 00:06:10.268 ************************************ 00:06:10.268 22:32:54 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.268 22:32:54 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.268 22:32:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.268 22:32:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.268 22:32:54 -- common/autotest_common.sh@10 -- # set +x 00:06:10.268 ************************************ 00:06:10.268 START TEST cpu_locks 00:06:10.268 ************************************ 00:06:10.268 22:32:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.268 * Looking for test storage... 00:06:10.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:10.268 22:32:54 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.268 22:32:54 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.268 22:32:54 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.268 22:32:54 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.268 22:32:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.268 22:32:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.269 22:32:54 -- common/autotest_common.sh@10 -- # set +x 00:06:10.269 ************************************ 00:06:10.269 START TEST default_locks 00:06:10.269 ************************************ 00:06:10.269 22:32:54 -- common/autotest_common.sh@1104 -- # default_locks 00:06:10.269 22:32:54 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=900502 00:06:10.269 22:32:54 -- event/cpu_locks.sh@47 -- # waitforlisten 900502 00:06:10.269 22:32:54 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.269 22:32:54 -- common/autotest_common.sh@819 -- # '[' -z 900502 ']' 00:06:10.269 22:32:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.269 22:32:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:10.269 22:32:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.269 22:32:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:10.269 22:32:54 -- common/autotest_common.sh@10 -- # set +x 00:06:10.269 [2024-04-15 22:32:54.982781] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
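
Editor's note: killprocess and waitforlisten, used for every target started in this log, are the other two harness building blocks. waitforlisten retries (up to max_retries=100) until the spawned process is listening on the given UNIX-domain RPC socket; killprocess tears the target down again. A rough equivalent of killprocess (an approximation, not the verbatim autotest_common.sh helper):

  killprocess() {                         # usage: killprocess <pid>
      local pid=$1 process_name
      [ -z "$pid" ] && return 1           # refuse an empty pid
      kill -0 "$pid" || return 1          # is the process still alive?
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1          # never signal a bare sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                 # reap it and swallow the expected non-zero status
  }
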
00:06:10.269 [2024-04-15 22:32:54.982844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900502 ] 00:06:10.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.269 [2024-04-15 22:32:55.048864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.530 [2024-04-15 22:32:55.111454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.530 [2024-04-15 22:32:55.111591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.102 22:32:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.102 22:32:55 -- common/autotest_common.sh@852 -- # return 0 00:06:11.102 22:32:55 -- event/cpu_locks.sh@49 -- # locks_exist 900502 00:06:11.102 22:32:55 -- event/cpu_locks.sh@22 -- # lslocks -p 900502 00:06:11.102 22:32:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.364 lslocks: write error 00:06:11.364 22:32:56 -- event/cpu_locks.sh@50 -- # killprocess 900502 00:06:11.364 22:32:56 -- common/autotest_common.sh@926 -- # '[' -z 900502 ']' 00:06:11.364 22:32:56 -- common/autotest_common.sh@930 -- # kill -0 900502 00:06:11.364 22:32:56 -- common/autotest_common.sh@931 -- # uname 00:06:11.364 22:32:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:11.364 22:32:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 900502 00:06:11.364 22:32:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:11.364 22:32:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:11.364 22:32:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 900502' 00:06:11.364 killing process with pid 900502 00:06:11.364 22:32:56 -- common/autotest_common.sh@945 -- # kill 900502 00:06:11.364 22:32:56 -- common/autotest_common.sh@950 -- # wait 900502 00:06:11.625 22:32:56 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 900502 00:06:11.625 22:32:56 -- common/autotest_common.sh@640 -- # local es=0 00:06:11.625 22:32:56 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 900502 00:06:11.625 22:32:56 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:11.625 22:32:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:11.625 22:32:56 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:11.625 22:32:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:11.625 22:32:56 -- common/autotest_common.sh@643 -- # waitforlisten 900502 00:06:11.625 22:32:56 -- common/autotest_common.sh@819 -- # '[' -z 900502 ']' 00:06:11.625 22:32:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.625 22:32:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.625 22:32:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.625 22:32:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.625 22:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:11.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (900502) - No such process 00:06:11.625 ERROR: process (pid: 900502) is no longer running 00:06:11.625 22:32:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.625 22:32:56 -- common/autotest_common.sh@852 -- # return 1 00:06:11.625 22:32:56 -- common/autotest_common.sh@643 -- # es=1 00:06:11.625 22:32:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:11.625 22:32:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:11.625 22:32:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:11.625 22:32:56 -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.625 22:32:56 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.625 22:32:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.625 22:32:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.625 00:06:11.625 real 0m1.441s 00:06:11.625 user 0m1.516s 00:06:11.625 sys 0m0.481s 00:06:11.625 22:32:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.625 22:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:11.625 ************************************ 00:06:11.625 END TEST default_locks 00:06:11.625 ************************************ 00:06:11.625 22:32:56 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.625 22:32:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.625 22:32:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.625 22:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:11.625 ************************************ 00:06:11.625 START TEST default_locks_via_rpc 00:06:11.625 ************************************ 00:06:11.625 22:32:56 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:11.625 22:32:56 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=900858 00:06:11.625 22:32:56 -- event/cpu_locks.sh@63 -- # waitforlisten 900858 00:06:11.625 22:32:56 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.625 22:32:56 -- common/autotest_common.sh@819 -- # '[' -z 900858 ']' 00:06:11.625 22:32:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.625 22:32:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.625 22:32:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.625 22:32:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.625 22:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:11.887 [2024-04-15 22:32:56.467356] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:11.887 [2024-04-15 22:32:56.467416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900858 ] 00:06:11.887 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.887 [2024-04-15 22:32:56.534041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.887 [2024-04-15 22:32:56.601123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.887 [2024-04-15 22:32:56.601256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.459 22:32:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.459 22:32:57 -- common/autotest_common.sh@852 -- # return 0 00:06:12.459 22:32:57 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.459 22:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.459 22:32:57 -- common/autotest_common.sh@10 -- # set +x 00:06:12.459 22:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.459 22:32:57 -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.459 22:32:57 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.459 22:32:57 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.459 22:32:57 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.459 22:32:57 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.459 22:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.459 22:32:57 -- common/autotest_common.sh@10 -- # set +x 00:06:12.459 22:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.459 22:32:57 -- event/cpu_locks.sh@71 -- # locks_exist 900858 00:06:12.459 22:32:57 -- event/cpu_locks.sh@22 -- # lslocks -p 900858 00:06:12.459 22:32:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.720 22:32:57 -- event/cpu_locks.sh@73 -- # killprocess 900858 00:06:12.720 22:32:57 -- common/autotest_common.sh@926 -- # '[' -z 900858 ']' 00:06:12.720 22:32:57 -- common/autotest_common.sh@930 -- # kill -0 900858 00:06:12.720 22:32:57 -- common/autotest_common.sh@931 -- # uname 00:06:12.980 22:32:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:12.980 22:32:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 900858 00:06:12.980 22:32:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:12.980 22:32:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:12.980 22:32:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 900858' 00:06:12.980 killing process with pid 900858 00:06:12.980 22:32:57 -- common/autotest_common.sh@945 -- # kill 900858 00:06:12.980 22:32:57 -- common/autotest_common.sh@950 -- # wait 900858 00:06:13.242 00:06:13.242 real 0m1.374s 00:06:13.242 user 0m1.450s 00:06:13.242 sys 0m0.460s 00:06:13.242 22:32:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.242 22:32:57 -- common/autotest_common.sh@10 -- # set +x 00:06:13.242 ************************************ 00:06:13.242 END TEST default_locks_via_rpc 00:06:13.242 ************************************ 00:06:13.242 22:32:57 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.242 22:32:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.242 22:32:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.242 22:32:57 -- common/autotest_common.sh@10 
-- # set +x 00:06:13.242 ************************************ 00:06:13.242 START TEST non_locking_app_on_locked_coremask 00:06:13.242 ************************************ 00:06:13.242 22:32:57 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:13.242 22:32:57 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=901220 00:06:13.242 22:32:57 -- event/cpu_locks.sh@81 -- # waitforlisten 901220 /var/tmp/spdk.sock 00:06:13.242 22:32:57 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.242 22:32:57 -- common/autotest_common.sh@819 -- # '[' -z 901220 ']' 00:06:13.242 22:32:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.242 22:32:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.242 22:32:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.242 22:32:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.242 22:32:57 -- common/autotest_common.sh@10 -- # set +x 00:06:13.242 [2024-04-15 22:32:57.887506] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:13.243 [2024-04-15 22:32:57.887567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901220 ] 00:06:13.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.243 [2024-04-15 22:32:57.952156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.243 [2024-04-15 22:32:58.015332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.243 [2024-04-15 22:32:58.015462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.184 22:32:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.184 22:32:58 -- common/autotest_common.sh@852 -- # return 0 00:06:14.184 22:32:58 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=901452 00:06:14.184 22:32:58 -- event/cpu_locks.sh@85 -- # waitforlisten 901452 /var/tmp/spdk2.sock 00:06:14.184 22:32:58 -- common/autotest_common.sh@819 -- # '[' -z 901452 ']' 00:06:14.184 22:32:58 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.184 22:32:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.184 22:32:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.184 22:32:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.184 22:32:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.184 22:32:58 -- common/autotest_common.sh@10 -- # set +x 00:06:14.184 [2024-04-15 22:32:58.688376] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:14.184 [2024-04-15 22:32:58.688428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901452 ] 00:06:14.184 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.184 [2024-04-15 22:32:58.786146] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.184 [2024-04-15 22:32:58.786175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.184 [2024-04-15 22:32:58.913134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.184 [2024-04-15 22:32:58.913262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.757 22:32:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.757 22:32:59 -- common/autotest_common.sh@852 -- # return 0 00:06:14.757 22:32:59 -- event/cpu_locks.sh@87 -- # locks_exist 901220 00:06:14.757 22:32:59 -- event/cpu_locks.sh@22 -- # lslocks -p 901220 00:06:14.757 22:32:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.018 lslocks: write error 00:06:15.018 22:32:59 -- event/cpu_locks.sh@89 -- # killprocess 901220 00:06:15.018 22:32:59 -- common/autotest_common.sh@926 -- # '[' -z 901220 ']' 00:06:15.018 22:32:59 -- common/autotest_common.sh@930 -- # kill -0 901220 00:06:15.018 22:32:59 -- common/autotest_common.sh@931 -- # uname 00:06:15.018 22:32:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:15.018 22:32:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 901220 00:06:15.018 22:32:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:15.018 22:32:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:15.018 22:32:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 901220' 00:06:15.018 killing process with pid 901220 00:06:15.018 22:32:59 -- common/autotest_common.sh@945 -- # kill 901220 00:06:15.018 22:32:59 -- common/autotest_common.sh@950 -- # wait 901220 00:06:15.590 22:33:00 -- event/cpu_locks.sh@90 -- # killprocess 901452 00:06:15.590 22:33:00 -- common/autotest_common.sh@926 -- # '[' -z 901452 ']' 00:06:15.590 22:33:00 -- common/autotest_common.sh@930 -- # kill -0 901452 00:06:15.590 22:33:00 -- common/autotest_common.sh@931 -- # uname 00:06:15.590 22:33:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:15.590 22:33:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 901452 00:06:15.590 22:33:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:15.590 22:33:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:15.590 22:33:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 901452' 00:06:15.590 killing process with pid 901452 00:06:15.590 22:33:00 -- common/autotest_common.sh@945 -- # kill 901452 00:06:15.590 22:33:00 -- common/autotest_common.sh@950 -- # wait 901452 00:06:15.852 00:06:15.852 real 0m2.597s 00:06:15.852 user 0m2.842s 00:06:15.852 sys 0m0.731s 00:06:15.852 22:33:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.852 22:33:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.852 ************************************ 00:06:15.852 END TEST non_locking_app_on_locked_coremask 00:06:15.852 ************************************ 00:06:15.852 22:33:00 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
00:06:15.852 22:33:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.852 22:33:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.852 22:33:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.852 ************************************ 00:06:15.852 START TEST locking_app_on_unlocked_coremask 00:06:15.852 ************************************ 00:06:15.852 22:33:00 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:15.852 22:33:00 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=901956 00:06:15.852 22:33:00 -- event/cpu_locks.sh@99 -- # waitforlisten 901956 /var/tmp/spdk.sock 00:06:15.852 22:33:00 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.852 22:33:00 -- common/autotest_common.sh@819 -- # '[' -z 901956 ']' 00:06:15.852 22:33:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.852 22:33:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.852 22:33:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.852 22:33:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.852 22:33:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.852 [2024-04-15 22:33:00.531551] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:15.852 [2024-04-15 22:33:00.531612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901956 ] 00:06:15.852 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.852 [2024-04-15 22:33:00.597957] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:15.852 [2024-04-15 22:33:00.597991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.113 [2024-04-15 22:33:00.664226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.113 [2024-04-15 22:33:00.664368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.686 22:33:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.686 22:33:01 -- common/autotest_common.sh@852 -- # return 0 00:06:16.686 22:33:01 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.686 22:33:01 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=902021 00:06:16.686 22:33:01 -- event/cpu_locks.sh@103 -- # waitforlisten 902021 /var/tmp/spdk2.sock 00:06:16.686 22:33:01 -- common/autotest_common.sh@819 -- # '[' -z 902021 ']' 00:06:16.686 22:33:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.686 22:33:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.686 22:33:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:16.686 22:33:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.686 22:33:01 -- common/autotest_common.sh@10 -- # set +x 00:06:16.686 [2024-04-15 22:33:01.318267] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:16.686 [2024-04-15 22:33:01.318314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902021 ] 00:06:16.686 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.686 [2024-04-15 22:33:01.416382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.948 [2024-04-15 22:33:01.543479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.948 [2024-04-15 22:33:01.543621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.521 22:33:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.521 22:33:02 -- common/autotest_common.sh@852 -- # return 0 00:06:17.521 22:33:02 -- event/cpu_locks.sh@105 -- # locks_exist 902021 00:06:17.521 22:33:02 -- event/cpu_locks.sh@22 -- # lslocks -p 902021 00:06:17.521 22:33:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.092 lslocks: write error 00:06:18.092 22:33:02 -- event/cpu_locks.sh@107 -- # killprocess 901956 00:06:18.092 22:33:02 -- common/autotest_common.sh@926 -- # '[' -z 901956 ']' 00:06:18.092 22:33:02 -- common/autotest_common.sh@930 -- # kill -0 901956 00:06:18.092 22:33:02 -- common/autotest_common.sh@931 -- # uname 00:06:18.092 22:33:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.093 22:33:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 901956 00:06:18.093 22:33:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.093 22:33:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.093 22:33:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 901956' 00:06:18.093 killing process with pid 901956 00:06:18.093 22:33:02 -- common/autotest_common.sh@945 -- # kill 901956 00:06:18.093 22:33:02 -- common/autotest_common.sh@950 -- # wait 901956 00:06:18.354 22:33:03 -- event/cpu_locks.sh@108 -- # killprocess 902021 00:06:18.354 22:33:03 -- common/autotest_common.sh@926 -- # '[' -z 902021 ']' 00:06:18.354 22:33:03 -- common/autotest_common.sh@930 -- # kill -0 902021 00:06:18.615 22:33:03 -- common/autotest_common.sh@931 -- # uname 00:06:18.616 22:33:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.616 22:33:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 902021 00:06:18.616 22:33:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.616 22:33:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.616 22:33:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 902021' 00:06:18.616 killing process with pid 902021 00:06:18.616 22:33:03 -- common/autotest_common.sh@945 -- # kill 902021 00:06:18.616 22:33:03 -- common/autotest_common.sh@950 -- # wait 902021 00:06:18.876 00:06:18.876 real 0m2.950s 00:06:18.876 user 0m3.170s 00:06:18.876 sys 0m0.906s 00:06:18.876 22:33:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.876 22:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:18.876 ************************************ 00:06:18.876 END TEST locking_app_on_unlocked_coremask 00:06:18.876 
************************************ 00:06:18.876 22:33:03 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.876 22:33:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.876 22:33:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.876 22:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:18.876 ************************************ 00:06:18.876 START TEST locking_app_on_locked_coremask 00:06:18.876 ************************************ 00:06:18.876 22:33:03 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:18.876 22:33:03 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=902571 00:06:18.876 22:33:03 -- event/cpu_locks.sh@116 -- # waitforlisten 902571 /var/tmp/spdk.sock 00:06:18.876 22:33:03 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.876 22:33:03 -- common/autotest_common.sh@819 -- # '[' -z 902571 ']' 00:06:18.876 22:33:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.876 22:33:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.876 22:33:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.876 22:33:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.876 22:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:18.876 [2024-04-15 22:33:03.524677] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:18.876 [2024-04-15 22:33:03.524735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902571 ] 00:06:18.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.876 [2024-04-15 22:33:03.590117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.876 [2024-04-15 22:33:03.652610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.876 [2024-04-15 22:33:03.652742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.819 22:33:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.819 22:33:04 -- common/autotest_common.sh@852 -- # return 0 00:06:19.819 22:33:04 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.819 22:33:04 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=902757 00:06:19.819 22:33:04 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 902757 /var/tmp/spdk2.sock 00:06:19.819 22:33:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:19.819 22:33:04 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 902757 /var/tmp/spdk2.sock 00:06:19.819 22:33:04 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:19.819 22:33:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.819 22:33:04 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:19.819 22:33:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.819 22:33:04 -- common/autotest_common.sh@643 -- # waitforlisten 902757 /var/tmp/spdk2.sock 00:06:19.819 22:33:04 -- common/autotest_common.sh@819 -- # '[' -z 902757 ']' 
00:06:19.819 22:33:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.819 22:33:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.819 22:33:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.819 22:33:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.819 22:33:04 -- common/autotest_common.sh@10 -- # set +x 00:06:19.819 [2024-04-15 22:33:04.313021] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:19.819 [2024-04-15 22:33:04.313070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902757 ] 00:06:19.819 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.819 [2024-04-15 22:33:04.413046] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 902571 has claimed it. 00:06:19.819 [2024-04-15 22:33:04.413087] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (902757) - No such process 00:06:20.391 ERROR: process (pid: 902757) is no longer running 00:06:20.391 22:33:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.391 22:33:04 -- common/autotest_common.sh@852 -- # return 1 00:06:20.391 22:33:04 -- common/autotest_common.sh@643 -- # es=1 00:06:20.391 22:33:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:20.391 22:33:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:20.391 22:33:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:20.391 22:33:04 -- event/cpu_locks.sh@122 -- # locks_exist 902571 00:06:20.391 22:33:04 -- event/cpu_locks.sh@22 -- # lslocks -p 902571 00:06:20.391 22:33:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.655 lslocks: write error 00:06:20.655 22:33:05 -- event/cpu_locks.sh@124 -- # killprocess 902571 00:06:20.655 22:33:05 -- common/autotest_common.sh@926 -- # '[' -z 902571 ']' 00:06:20.655 22:33:05 -- common/autotest_common.sh@930 -- # kill -0 902571 00:06:20.655 22:33:05 -- common/autotest_common.sh@931 -- # uname 00:06:20.655 22:33:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.655 22:33:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 902571 00:06:20.655 22:33:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.655 22:33:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.655 22:33:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 902571' 00:06:20.655 killing process with pid 902571 00:06:20.655 22:33:05 -- common/autotest_common.sh@945 -- # kill 902571 00:06:20.655 22:33:05 -- common/autotest_common.sh@950 -- # wait 902571 00:06:20.914 00:06:20.914 real 0m2.069s 00:06:20.914 user 0m2.286s 00:06:20.914 sys 0m0.563s 00:06:20.914 22:33:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.914 22:33:05 -- common/autotest_common.sh@10 -- # set +x 00:06:20.914 ************************************ 00:06:20.914 END TEST locking_app_on_locked_coremask 00:06:20.914 ************************************ 00:06:20.914 22:33:05 -- event/cpu_locks.sh@171 -- 
# run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:20.914 22:33:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.914 22:33:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.914 22:33:05 -- common/autotest_common.sh@10 -- # set +x 00:06:20.914 ************************************ 00:06:20.914 START TEST locking_overlapped_coremask 00:06:20.914 ************************************ 00:06:20.914 22:33:05 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:20.914 22:33:05 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=903125 00:06:20.914 22:33:05 -- event/cpu_locks.sh@133 -- # waitforlisten 903125 /var/tmp/spdk.sock 00:06:20.914 22:33:05 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:20.914 22:33:05 -- common/autotest_common.sh@819 -- # '[' -z 903125 ']' 00:06:20.914 22:33:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.914 22:33:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.914 22:33:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.914 22:33:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.914 22:33:05 -- common/autotest_common.sh@10 -- # set +x 00:06:20.914 [2024-04-15 22:33:05.636394] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:20.914 [2024-04-15 22:33:05.636448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903125 ] 00:06:20.914 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.914 [2024-04-15 22:33:05.702415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.174 [2024-04-15 22:33:05.766297] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.174 [2024-04-15 22:33:05.766537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.174 [2024-04-15 22:33:05.766677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.174 [2024-04-15 22:33:05.766770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.745 22:33:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.745 22:33:06 -- common/autotest_common.sh@852 -- # return 0 00:06:21.745 22:33:06 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:21.745 22:33:06 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=903140 00:06:21.745 22:33:06 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 903140 /var/tmp/spdk2.sock 00:06:21.745 22:33:06 -- common/autotest_common.sh@640 -- # local es=0 00:06:21.745 22:33:06 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 903140 /var/tmp/spdk2.sock 00:06:21.745 22:33:06 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:21.745 22:33:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:21.745 22:33:06 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:21.745 22:33:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:21.745 22:33:06 -- common/autotest_common.sh@643 -- # 
waitforlisten 903140 /var/tmp/spdk2.sock 00:06:21.745 22:33:06 -- common/autotest_common.sh@819 -- # '[' -z 903140 ']' 00:06:21.745 22:33:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.745 22:33:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.745 22:33:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.745 22:33:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.745 22:33:06 -- common/autotest_common.sh@10 -- # set +x 00:06:21.745 [2024-04-15 22:33:06.437528] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:21.746 [2024-04-15 22:33:06.437586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903140 ] 00:06:21.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.746 [2024-04-15 22:33:06.518105] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 903125 has claimed it. 00:06:21.746 [2024-04-15 22:33:06.518137] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (903140) - No such process 00:06:22.318 ERROR: process (pid: 903140) is no longer running 00:06:22.318 22:33:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.318 22:33:07 -- common/autotest_common.sh@852 -- # return 1 00:06:22.318 22:33:07 -- common/autotest_common.sh@643 -- # es=1 00:06:22.318 22:33:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:22.318 22:33:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:22.318 22:33:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:22.318 22:33:07 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:22.318 22:33:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.318 22:33:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.318 22:33:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.318 22:33:07 -- event/cpu_locks.sh@141 -- # killprocess 903125 00:06:22.318 22:33:07 -- common/autotest_common.sh@926 -- # '[' -z 903125 ']' 00:06:22.318 22:33:07 -- common/autotest_common.sh@930 -- # kill -0 903125 00:06:22.318 22:33:07 -- common/autotest_common.sh@931 -- # uname 00:06:22.318 22:33:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:22.318 22:33:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 903125 00:06:22.318 22:33:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:22.318 22:33:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:22.318 22:33:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 903125' 00:06:22.318 killing process with pid 903125 00:06:22.318 22:33:07 -- common/autotest_common.sh@945 -- # kill 903125 00:06:22.318 22:33:07 -- common/autotest_common.sh@950 -- # wait 903125 
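The exit above is the intended outcome of the overlap check: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the per-core lock on core 2 cannot be taken twice. A minimal by-hand reproduction of the same collision, assuming an SPDK build at ./build/bin/spdk_tgt on a host already set up for hugepages (a sketch only, not the test harness invocation):

    ./build/bin/spdk_tgt -m 0x7 &                          # first instance claims cores 0-2
    sleep 2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # overlaps on core 2 and exits with
                                                           # "Cannot create lock on core 2 ..."
    # passing --disable-cpumask-locks to either invocation skips the core lock files entirely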
00:06:22.579 00:06:22.579 real 0m1.737s 00:06:22.579 user 0m4.936s 00:06:22.579 sys 0m0.341s 00:06:22.579 22:33:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.579 22:33:07 -- common/autotest_common.sh@10 -- # set +x 00:06:22.579 ************************************ 00:06:22.579 END TEST locking_overlapped_coremask 00:06:22.579 ************************************ 00:06:22.579 22:33:07 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.579 22:33:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:22.579 22:33:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.579 22:33:07 -- common/autotest_common.sh@10 -- # set +x 00:06:22.579 ************************************ 00:06:22.579 START TEST locking_overlapped_coremask_via_rpc 00:06:22.579 ************************************ 00:06:22.579 22:33:07 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:22.579 22:33:07 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=903602 00:06:22.579 22:33:07 -- event/cpu_locks.sh@149 -- # waitforlisten 903602 /var/tmp/spdk.sock 00:06:22.579 22:33:07 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.579 22:33:07 -- common/autotest_common.sh@819 -- # '[' -z 903602 ']' 00:06:22.579 22:33:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.579 22:33:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.579 22:33:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.579 22:33:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.579 22:33:07 -- common/autotest_common.sh@10 -- # set +x 00:06:22.875 [2024-04-15 22:33:07.418274] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:22.875 [2024-04-15 22:33:07.418331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903602 ] 00:06:22.875 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.875 [2024-04-15 22:33:07.484500] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.875 [2024-04-15 22:33:07.484529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.875 [2024-04-15 22:33:07.549701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.875 [2024-04-15 22:33:07.549848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.875 [2024-04-15 22:33:07.549981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.875 [2024-04-15 22:33:07.549984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.446 22:33:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.446 22:33:08 -- common/autotest_common.sh@852 -- # return 0 00:06:23.446 22:33:08 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=903845 00:06:23.446 22:33:08 -- event/cpu_locks.sh@153 -- # waitforlisten 903845 /var/tmp/spdk2.sock 00:06:23.446 22:33:08 -- common/autotest_common.sh@819 -- # '[' -z 903845 ']' 00:06:23.446 22:33:08 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.446 22:33:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.446 22:33:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.446 22:33:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.446 22:33:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.446 22:33:08 -- common/autotest_common.sh@10 -- # set +x 00:06:23.446 [2024-04-15 22:33:08.242754] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:23.446 [2024-04-15 22:33:08.242806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903845 ] 00:06:23.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.707 [2024-04-15 22:33:08.317460] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.707 [2024-04-15 22:33:08.317478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.707 [2024-04-15 22:33:08.426651] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.707 [2024-04-15 22:33:08.426896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.707 [2024-04-15 22:33:08.427054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.707 [2024-04-15 22:33:08.427057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.278 22:33:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.278 22:33:08 -- common/autotest_common.sh@852 -- # return 0 00:06:24.278 22:33:08 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.278 22:33:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.278 22:33:08 -- common/autotest_common.sh@10 -- # set +x 00:06:24.278 22:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.278 22:33:09 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.278 22:33:09 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.278 22:33:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.278 22:33:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:24.278 22:33:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.278 22:33:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:24.278 22:33:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.278 22:33:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.278 22:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.278 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.278 [2024-04-15 22:33:09.016592] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 903602 has claimed it. 00:06:24.278 request: 00:06:24.278 { 00:06:24.278 "method": "framework_enable_cpumask_locks", 00:06:24.278 "req_id": 1 00:06:24.278 } 00:06:24.278 Got JSON-RPC error response 00:06:24.278 response: 00:06:24.278 { 00:06:24.278 "code": -32603, 00:06:24.278 "message": "Failed to claim CPU core: 2" 00:06:24.278 } 00:06:24.278 22:33:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:24.278 22:33:09 -- common/autotest_common.sh@643 -- # es=1 00:06:24.278 22:33:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:24.278 22:33:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:24.278 22:33:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:24.278 22:33:09 -- event/cpu_locks.sh@158 -- # waitforlisten 903602 /var/tmp/spdk.sock 00:06:24.278 22:33:09 -- common/autotest_common.sh@819 -- # '[' -z 903602 ']' 00:06:24.278 22:33:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.278 22:33:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.278 22:33:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
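The JSON-RPC exchange above shows the same collision taken through the RPC path: both targets were launched with --disable-cpumask-locks, and only the first can later claim core 2 via framework_enable_cpumask_locks. A rough by-hand equivalent, assuming scripts/rpc.py from the SPDK tree and the two sockets from the log still listening (paths and socket names are copied from the output above, not verified here):

    ./scripts/rpc.py framework_enable_cpumask_locks                          # first target, /var/tmp/spdk.sock
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target
    # the second call is expected to return error -32603 "Failed to claim CPU core: 2",
    # matching the response body printed above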
00:06:24.278 22:33:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.278 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.539 22:33:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.539 22:33:09 -- common/autotest_common.sh@852 -- # return 0 00:06:24.539 22:33:09 -- event/cpu_locks.sh@159 -- # waitforlisten 903845 /var/tmp/spdk2.sock 00:06:24.539 22:33:09 -- common/autotest_common.sh@819 -- # '[' -z 903845 ']' 00:06:24.539 22:33:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.539 22:33:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.539 22:33:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.539 22:33:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.539 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.800 22:33:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.800 22:33:09 -- common/autotest_common.sh@852 -- # return 0 00:06:24.800 22:33:09 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.800 22:33:09 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.800 22:33:09 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.800 22:33:09 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.800 00:06:24.800 real 0m1.990s 00:06:24.800 user 0m0.746s 00:06:24.800 sys 0m0.167s 00:06:24.800 22:33:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.800 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.800 ************************************ 00:06:24.800 END TEST locking_overlapped_coremask_via_rpc 00:06:24.800 ************************************ 00:06:24.800 22:33:09 -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.800 22:33:09 -- event/cpu_locks.sh@15 -- # [[ -z 903602 ]] 00:06:24.800 22:33:09 -- event/cpu_locks.sh@15 -- # killprocess 903602 00:06:24.800 22:33:09 -- common/autotest_common.sh@926 -- # '[' -z 903602 ']' 00:06:24.800 22:33:09 -- common/autotest_common.sh@930 -- # kill -0 903602 00:06:24.800 22:33:09 -- common/autotest_common.sh@931 -- # uname 00:06:24.800 22:33:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.800 22:33:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 903602 00:06:24.800 22:33:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.800 22:33:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.800 22:33:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 903602' 00:06:24.800 killing process with pid 903602 00:06:24.800 22:33:09 -- common/autotest_common.sh@945 -- # kill 903602 00:06:24.800 22:33:09 -- common/autotest_common.sh@950 -- # wait 903602 00:06:25.061 22:33:09 -- event/cpu_locks.sh@16 -- # [[ -z 903845 ]] 00:06:25.061 22:33:09 -- event/cpu_locks.sh@16 -- # killprocess 903845 00:06:25.061 22:33:09 -- common/autotest_common.sh@926 -- # '[' -z 903845 ']' 00:06:25.061 22:33:09 -- common/autotest_common.sh@930 -- # kill -0 903845 00:06:25.061 22:33:09 -- common/autotest_common.sh@931 -- # uname 00:06:25.061 
22:33:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.061 22:33:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 903845 00:06:25.061 22:33:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:25.061 22:33:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:25.061 22:33:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 903845' 00:06:25.061 killing process with pid 903845 00:06:25.061 22:33:09 -- common/autotest_common.sh@945 -- # kill 903845 00:06:25.061 22:33:09 -- common/autotest_common.sh@950 -- # wait 903845 00:06:25.322 22:33:09 -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.322 22:33:09 -- event/cpu_locks.sh@1 -- # cleanup 00:06:25.322 22:33:09 -- event/cpu_locks.sh@15 -- # [[ -z 903602 ]] 00:06:25.322 22:33:09 -- event/cpu_locks.sh@15 -- # killprocess 903602 00:06:25.322 22:33:09 -- common/autotest_common.sh@926 -- # '[' -z 903602 ']' 00:06:25.322 22:33:09 -- common/autotest_common.sh@930 -- # kill -0 903602 00:06:25.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (903602) - No such process 00:06:25.322 22:33:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 903602 is not found' 00:06:25.322 Process with pid 903602 is not found 00:06:25.322 22:33:09 -- event/cpu_locks.sh@16 -- # [[ -z 903845 ]] 00:06:25.322 22:33:09 -- event/cpu_locks.sh@16 -- # killprocess 903845 00:06:25.322 22:33:09 -- common/autotest_common.sh@926 -- # '[' -z 903845 ']' 00:06:25.323 22:33:09 -- common/autotest_common.sh@930 -- # kill -0 903845 00:06:25.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (903845) - No such process 00:06:25.323 22:33:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 903845 is not found' 00:06:25.323 Process with pid 903845 is not found 00:06:25.323 22:33:09 -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.323 00:06:25.323 real 0m15.095s 00:06:25.323 user 0m26.415s 00:06:25.323 sys 0m4.407s 00:06:25.323 22:33:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.323 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:06:25.323 ************************************ 00:06:25.323 END TEST cpu_locks 00:06:25.323 ************************************ 00:06:25.323 00:06:25.323 real 0m41.079s 00:06:25.323 user 1m21.947s 00:06:25.323 sys 0m7.432s 00:06:25.323 22:33:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.323 22:33:09 -- common/autotest_common.sh@10 -- # set +x 00:06:25.323 ************************************ 00:06:25.323 END TEST event 00:06:25.323 ************************************ 00:06:25.323 22:33:10 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.323 22:33:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:25.323 22:33:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.323 22:33:10 -- common/autotest_common.sh@10 -- # set +x 00:06:25.323 ************************************ 00:06:25.323 START TEST thread 00:06:25.323 ************************************ 00:06:25.323 22:33:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.323 * Looking for test storage... 
00:06:25.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:25.323 22:33:10 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.323 22:33:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:25.323 22:33:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.323 22:33:10 -- common/autotest_common.sh@10 -- # set +x 00:06:25.323 ************************************ 00:06:25.323 START TEST thread_poller_perf 00:06:25.323 ************************************ 00:06:25.323 22:33:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.323 [2024-04-15 22:33:10.130989] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:25.323 [2024-04-15 22:33:10.131113] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904551 ] 00:06:25.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.584 [2024-04-15 22:33:10.205538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.584 [2024-04-15 22:33:10.277802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.584 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:26.967 ====================================== 00:06:26.967 busy:2408930144 (cyc) 00:06:26.967 total_run_count: 276000 00:06:26.967 tsc_hz: 2400000000 (cyc) 00:06:26.967 ====================================== 00:06:26.967 poller_cost: 8728 (cyc), 3636 (nsec) 00:06:26.967 00:06:26.967 real 0m1.229s 00:06:26.967 user 0m1.137s 00:06:26.967 sys 0m0.087s 00:06:26.967 22:33:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.967 22:33:11 -- common/autotest_common.sh@10 -- # set +x 00:06:26.967 ************************************ 00:06:26.967 END TEST thread_poller_perf 00:06:26.967 ************************************ 00:06:26.967 22:33:11 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.967 22:33:11 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:26.967 22:33:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.967 22:33:11 -- common/autotest_common.sh@10 -- # set +x 00:06:26.967 ************************************ 00:06:26.967 START TEST thread_poller_perf 00:06:26.967 ************************************ 00:06:26.967 22:33:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.967 [2024-04-15 22:33:11.400440] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:26.967 [2024-04-15 22:33:11.400557] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904769 ] 00:06:26.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.967 [2024-04-15 22:33:11.469343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.967 [2024-04-15 22:33:11.534861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.967 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:27.908 ====================================== 00:06:27.908 busy:2402317906 (cyc) 00:06:27.908 total_run_count: 3809000 00:06:27.908 tsc_hz: 2400000000 (cyc) 00:06:27.908 ====================================== 00:06:27.908 poller_cost: 630 (cyc), 262 (nsec) 00:06:27.908 00:06:27.908 real 0m1.208s 00:06:27.908 user 0m1.134s 00:06:27.908 sys 0m0.070s 00:06:27.908 22:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.908 22:33:12 -- common/autotest_common.sh@10 -- # set +x 00:06:27.908 ************************************ 00:06:27.908 END TEST thread_poller_perf 00:06:27.908 ************************************ 00:06:27.908 22:33:12 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.908 00:06:27.908 real 0m2.610s 00:06:27.908 user 0m2.349s 00:06:27.908 sys 0m0.271s 00:06:27.908 22:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.908 22:33:12 -- common/autotest_common.sh@10 -- # set +x 00:06:27.908 ************************************ 00:06:27.908 END TEST thread 00:06:27.908 ************************************ 00:06:27.908 22:33:12 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:27.908 22:33:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.908 22:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.908 22:33:12 -- common/autotest_common.sh@10 -- # set +x 00:06:27.908 ************************************ 00:06:27.908 START TEST accel 00:06:27.908 ************************************ 00:06:27.908 22:33:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:28.168 * Looking for test storage... 00:06:28.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:28.168 22:33:12 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:28.168 22:33:12 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:28.168 22:33:12 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:28.168 22:33:12 -- accel/accel.sh@59 -- # spdk_tgt_pid=905166 00:06:28.168 22:33:12 -- accel/accel.sh@60 -- # waitforlisten 905166 00:06:28.168 22:33:12 -- common/autotest_common.sh@819 -- # '[' -z 905166 ']' 00:06:28.168 22:33:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.168 22:33:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.168 22:33:12 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:28.168 22:33:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
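For reference, the poller_cost figures in the two result tables above are just the printed counters combined: busy cycles divided by total_run_count gives cycles per poll, and scaling by tsc_hz converts that to nanoseconds. A quick shell check using the first run's numbers copied from the table (2408930144 cycles over 276000 runs at 2.4 GHz):

    busy=2408930144; runs=276000; tsc_hz=2400000000
    cyc=$((busy / runs))                         # 8728 cycles per poll
    nsec=$((cyc * 1000000000 / tsc_hz))          # 3636 ns at 2.4 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The second run works out the same way: 2402317906 / 3809000 gives 630 cycles, about 262 ns.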
00:06:28.168 22:33:12 -- accel/accel.sh@58 -- # build_accel_config 00:06:28.168 22:33:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.168 22:33:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.168 22:33:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.168 22:33:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.168 22:33:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.168 22:33:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.168 22:33:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.168 22:33:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.168 22:33:12 -- accel/accel.sh@42 -- # jq -r . 00:06:28.168 [2024-04-15 22:33:12.811857] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:28.168 [2024-04-15 22:33:12.811934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905166 ] 00:06:28.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.168 [2024-04-15 22:33:12.882537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.168 [2024-04-15 22:33:12.954151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.168 [2024-04-15 22:33:12.954279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.111 22:33:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.111 22:33:13 -- common/autotest_common.sh@852 -- # return 0 00:06:29.111 22:33:13 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:29.111 22:33:13 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:29.111 22:33:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.111 22:33:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.111 22:33:13 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:29.111 22:33:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # IFS== 00:06:29.111 22:33:13 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.111 22:33:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.111 22:33:13 -- accel/accel.sh@67 -- # killprocess 905166 00:06:29.111 22:33:13 -- common/autotest_common.sh@926 -- # '[' -z 905166 ']' 00:06:29.111 22:33:13 -- common/autotest_common.sh@930 -- # kill -0 905166 00:06:29.111 22:33:13 -- common/autotest_common.sh@931 -- # uname 00:06:29.111 22:33:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.111 22:33:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 905166 00:06:29.111 22:33:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.111 22:33:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.111 22:33:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 905166' 00:06:29.111 killing process with pid 905166 00:06:29.111 22:33:13 -- common/autotest_common.sh@945 -- # kill 905166 00:06:29.111 22:33:13 -- common/autotest_common.sh@950 -- # wait 905166 00:06:29.111 22:33:13 -- accel/accel.sh@68 -- # trap - ERR 00:06:29.111 22:33:13 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:29.111 22:33:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:29.111 22:33:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.111 22:33:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.111 22:33:13 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:29.111 22:33:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:29.111 22:33:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.111 22:33:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.111 22:33:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.111 22:33:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.111 22:33:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.111 22:33:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.111 22:33:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.111 22:33:13 -- accel/accel.sh@42 -- # jq -r . 
00:06:29.373 22:33:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.373 22:33:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.373 22:33:13 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:29.373 22:33:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:29.373 22:33:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.373 22:33:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.373 ************************************ 00:06:29.373 START TEST accel_missing_filename 00:06:29.373 ************************************ 00:06:29.373 22:33:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:29.373 22:33:13 -- common/autotest_common.sh@640 -- # local es=0 00:06:29.373 22:33:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:29.373 22:33:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:29.373 22:33:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.373 22:33:13 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:29.373 22:33:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.373 22:33:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:29.373 22:33:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:29.373 22:33:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.373 22:33:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.373 22:33:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.373 22:33:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.373 22:33:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.373 22:33:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.373 22:33:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.373 22:33:13 -- accel/accel.sh@42 -- # jq -r . 00:06:29.373 [2024-04-15 22:33:14.001262] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:29.373 [2024-04-15 22:33:14.001364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905532 ] 00:06:29.373 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.373 [2024-04-15 22:33:14.070319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.373 [2024-04-15 22:33:14.137280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.373 [2024-04-15 22:33:14.169164] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.634 [2024-04-15 22:33:14.206972] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:29.634 A filename is required. 
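accel_missing_filename is a negative test: compress is requested without -l, accel_perf refuses to start ("A filename is required."), and the NOT wrapper from autotest_common.sh turns that expected failure into a pass. The es=234 / es=106 / es=1 lines that follow are the wrapper folding the raw exit status down before checking that it is non-zero. A minimal sketch of the inversion idea only (the real helper also performs the status folding, which this sketch omits):

    NOT() {
        # succeed only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT accel_perf -t 1 -w compress   # passes, because accel_perf exits non-zero without -l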
00:06:29.634 22:33:14 -- common/autotest_common.sh@643 -- # es=234 00:06:29.634 22:33:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.634 22:33:14 -- common/autotest_common.sh@652 -- # es=106 00:06:29.634 22:33:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:29.634 22:33:14 -- common/autotest_common.sh@660 -- # es=1 00:06:29.634 22:33:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.634 00:06:29.634 real 0m0.289s 00:06:29.634 user 0m0.219s 00:06:29.634 sys 0m0.111s 00:06:29.634 22:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.634 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:29.634 ************************************ 00:06:29.634 END TEST accel_missing_filename 00:06:29.634 ************************************ 00:06:29.634 22:33:14 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.634 22:33:14 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:29.634 22:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.634 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:29.634 ************************************ 00:06:29.634 START TEST accel_compress_verify 00:06:29.634 ************************************ 00:06:29.634 22:33:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.634 22:33:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:29.634 22:33:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.634 22:33:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:29.634 22:33:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.634 22:33:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:29.634 22:33:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.634 22:33:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.634 22:33:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.634 22:33:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.634 22:33:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.634 22:33:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.634 22:33:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.634 22:33:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.634 22:33:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.634 22:33:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.634 22:33:14 -- accel/accel.sh@42 -- # jq -r . 00:06:29.634 [2024-04-15 22:33:14.333034] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:29.634 [2024-04-15 22:33:14.333133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905552 ] 00:06:29.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.634 [2024-04-15 22:33:14.400888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.895 [2024-04-15 22:33:14.463634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.895 [2024-04-15 22:33:14.495351] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.895 [2024-04-15 22:33:14.533126] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:29.895 00:06:29.896 Compression does not support the verify option, aborting. 00:06:29.896 22:33:14 -- common/autotest_common.sh@643 -- # es=161 00:06:29.896 22:33:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.896 22:33:14 -- common/autotest_common.sh@652 -- # es=33 00:06:29.896 22:33:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:29.896 22:33:14 -- common/autotest_common.sh@660 -- # es=1 00:06:29.896 22:33:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.896 00:06:29.896 real 0m0.284s 00:06:29.896 user 0m0.215s 00:06:29.896 sys 0m0.110s 00:06:29.896 22:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.896 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:29.896 ************************************ 00:06:29.896 END TEST accel_compress_verify 00:06:29.896 ************************************ 00:06:29.896 22:33:14 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:29.896 22:33:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:29.896 22:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.896 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:29.896 ************************************ 00:06:29.896 START TEST accel_wrong_workload 00:06:29.896 ************************************ 00:06:29.896 22:33:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:29.896 22:33:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:29.896 22:33:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:29.896 22:33:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:29.896 22:33:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.896 22:33:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:29.896 22:33:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.896 22:33:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:29.896 22:33:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:29.896 22:33:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.896 22:33:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.896 22:33:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.896 22:33:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.896 22:33:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.896 22:33:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.896 22:33:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.896 22:33:14 -- accel/accel.sh@42 -- # jq -r . 
00:06:29.896 Unsupported workload type: foobar 00:06:29.896 [2024-04-15 22:33:14.656925] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:29.896 accel_perf options: 00:06:29.896 [-h help message] 00:06:29.896 [-q queue depth per core] 00:06:29.896 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:29.896 [-T number of threads per core 00:06:29.896 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:29.896 [-t time in seconds] 00:06:29.896 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:29.896 [ dif_verify, , dif_generate, dif_generate_copy 00:06:29.896 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:29.896 [-l for compress/decompress workloads, name of uncompressed input file 00:06:29.896 [-S for crc32c workload, use this seed value (default 0) 00:06:29.896 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:29.896 [-f for fill workload, use this BYTE value (default 255) 00:06:29.896 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:29.896 [-y verify result if this switch is on] 00:06:29.896 [-a tasks to allocate per core (default: same value as -q)] 00:06:29.896 Can be used to spread operations across a wider range of memory. 00:06:29.896 22:33:14 -- common/autotest_common.sh@643 -- # es=1 00:06:29.896 22:33:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.896 22:33:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:29.896 22:33:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.896 00:06:29.896 real 0m0.036s 00:06:29.896 user 0m0.020s 00:06:29.896 sys 0m0.016s 00:06:29.896 22:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.896 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:29.896 ************************************ 00:06:29.896 END TEST accel_wrong_workload 00:06:29.896 ************************************ 00:06:29.896 Error: writing output failed: Broken pipe 00:06:29.896 22:33:14 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:29.896 22:33:14 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:29.896 22:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.896 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:30.158 ************************************ 00:06:30.158 START TEST accel_negative_buffers 00:06:30.158 ************************************ 00:06:30.158 22:33:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.158 22:33:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:30.158 22:33:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:30.158 22:33:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:30.158 22:33:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:30.158 22:33:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:30.158 22:33:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:30.158 22:33:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:30.158 22:33:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:30.158 22:33:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.158 22:33:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.158 22:33:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.158 22:33:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.158 22:33:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.158 22:33:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.158 22:33:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.158 22:33:14 -- accel/accel.sh@42 -- # jq -r . 00:06:30.158 -x option must be non-negative. 00:06:30.158 [2024-04-15 22:33:14.735256] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:30.158 accel_perf options: 00:06:30.158 [-h help message] 00:06:30.158 [-q queue depth per core] 00:06:30.158 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.158 [-T number of threads per core 00:06:30.158 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.158 [-t time in seconds] 00:06:30.158 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.158 [ dif_verify, , dif_generate, dif_generate_copy 00:06:30.158 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.158 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.158 [-S for crc32c workload, use this seed value (default 0) 00:06:30.158 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.158 [-f for fill workload, use this BYTE value (default 255) 00:06:30.158 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.158 [-y verify result if this switch is on] 00:06:30.158 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.158 Can be used to spread operations across a wider range of memory. 
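accel_negative_buffers exercises argument validation the same way: -x -1 is rejected before any work starts. Going by the option list printed above, a valid xor invocation would instead pass a source-buffer count of at least 2, for example (illustrative only, not taken from the test script):

    accel_perf -t 1 -w xor -x 2 -y   # two source buffers (the documented minimum), verify on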
00:06:30.158 22:33:14 -- common/autotest_common.sh@643 -- # es=1 00:06:30.158 22:33:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:30.158 22:33:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:30.158 22:33:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:30.158 00:06:30.158 real 0m0.035s 00:06:30.158 user 0m0.023s 00:06:30.158 sys 0m0.013s 00:06:30.158 22:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.158 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:30.158 ************************************ 00:06:30.158 END TEST accel_negative_buffers 00:06:30.158 ************************************ 00:06:30.158 Error: writing output failed: Broken pipe 00:06:30.158 22:33:14 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:30.158 22:33:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:30.158 22:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.158 22:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:30.158 ************************************ 00:06:30.158 START TEST accel_crc32c 00:06:30.158 ************************************ 00:06:30.158 22:33:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:30.158 22:33:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.158 22:33:14 -- accel/accel.sh@17 -- # local accel_module 00:06:30.158 22:33:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:30.158 22:33:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:30.158 22:33:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.158 22:33:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.158 22:33:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.158 22:33:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.158 22:33:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.158 22:33:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.158 22:33:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.158 22:33:14 -- accel/accel.sh@42 -- # jq -r . 00:06:30.158 [2024-04-15 22:33:14.808794] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:30.158 [2024-04-15 22:33:14.808854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905616 ] 00:06:30.158 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.158 [2024-04-15 22:33:14.876650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.158 [2024-04-15 22:33:14.939299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.544 22:33:16 -- accel/accel.sh@18 -- # out=' 00:06:31.544 SPDK Configuration: 00:06:31.544 Core mask: 0x1 00:06:31.544 00:06:31.544 Accel Perf Configuration: 00:06:31.544 Workload Type: crc32c 00:06:31.544 CRC-32C seed: 32 00:06:31.544 Transfer size: 4096 bytes 00:06:31.544 Vector count 1 00:06:31.544 Module: software 00:06:31.544 Queue depth: 32 00:06:31.544 Allocate depth: 32 00:06:31.544 # threads/core: 1 00:06:31.544 Run time: 1 seconds 00:06:31.544 Verify: Yes 00:06:31.544 00:06:31.544 Running for 1 seconds... 
00:06:31.544 00:06:31.544 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.544 ------------------------------------------------------------------------------------ 00:06:31.544 0,0 440608/s 1721 MiB/s 0 0 00:06:31.544 ==================================================================================== 00:06:31.544 Total 440608/s 1721 MiB/s 0 0' 00:06:31.544 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.544 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.544 22:33:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:31.544 22:33:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:31.544 22:33:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.544 22:33:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.544 22:33:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.544 22:33:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.544 22:33:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.544 22:33:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.544 22:33:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.544 22:33:16 -- accel/accel.sh@42 -- # jq -r . 00:06:31.544 [2024-04-15 22:33:16.092720] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:31.544 [2024-04-15 22:33:16.092800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905950 ] 00:06:31.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.544 [2024-04-15 22:33:16.159711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.544 [2024-04-15 22:33:16.222528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.544 22:33:16 -- accel/accel.sh@21 -- # val= 00:06:31.544 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.544 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.544 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val= 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val=0x1 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val= 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val= 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val=crc32c 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val=32 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 
22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val= 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val=software 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val=32 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val=32 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val=1 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val=Yes 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val= 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:31.545 22:33:16 -- accel/accel.sh@21 -- # val= 00:06:31.545 22:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # IFS=: 00:06:31.545 22:33:16 -- accel/accel.sh@20 -- # read -r var val 00:06:32.929 22:33:17 -- accel/accel.sh@21 -- # val= 00:06:32.929 22:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.929 22:33:17 -- accel/accel.sh@20 -- # IFS=: 00:06:32.929 22:33:17 -- accel/accel.sh@20 -- # read -r var val 00:06:32.930 22:33:17 -- accel/accel.sh@21 -- # val= 00:06:32.930 22:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # IFS=: 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # read -r var val 00:06:32.930 22:33:17 -- accel/accel.sh@21 -- # val= 00:06:32.930 22:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # IFS=: 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # read -r var val 00:06:32.930 22:33:17 -- accel/accel.sh@21 -- # val= 00:06:32.930 22:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # IFS=: 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # read -r var val 00:06:32.930 22:33:17 -- accel/accel.sh@21 -- # val= 00:06:32.930 22:33:17 -- accel/accel.sh@22 -- # case "$var" in 
00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # IFS=: 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # read -r var val 00:06:32.930 22:33:17 -- accel/accel.sh@21 -- # val= 00:06:32.930 22:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # IFS=: 00:06:32.930 22:33:17 -- accel/accel.sh@20 -- # read -r var val 00:06:32.930 22:33:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.930 22:33:17 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:32.930 22:33:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.930 00:06:32.930 real 0m2.571s 00:06:32.930 user 0m2.367s 00:06:32.930 sys 0m0.210s 00:06:32.930 22:33:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.930 22:33:17 -- common/autotest_common.sh@10 -- # set +x 00:06:32.930 ************************************ 00:06:32.930 END TEST accel_crc32c 00:06:32.930 ************************************ 00:06:32.930 22:33:17 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:32.930 22:33:17 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:32.930 22:33:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.930 22:33:17 -- common/autotest_common.sh@10 -- # set +x 00:06:32.930 ************************************ 00:06:32.930 START TEST accel_crc32c_C2 00:06:32.930 ************************************ 00:06:32.930 22:33:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:32.930 22:33:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.930 22:33:17 -- accel/accel.sh@17 -- # local accel_module 00:06:32.930 22:33:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:32.930 22:33:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:32.930 22:33:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.930 22:33:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.930 22:33:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.930 22:33:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.930 22:33:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.930 22:33:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.930 22:33:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.930 22:33:17 -- accel/accel.sh@42 -- # jq -r . 00:06:32.930 [2024-04-15 22:33:17.423070] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
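In the crc32c case above, -S 32 shows up as "CRC-32C seed: 32" and -y as "Verify: Yes" in the configuration block, and the bandwidth column is simply transfers per second times the 4096-byte transfer size. Checking the first result line with the numbers as printed:

    echo $(( 440608 * 4096 / 1048576 ))   # 440608 ops/s * 4 KiB  -> 1721 MiB/s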
00:06:32.930 [2024-04-15 22:33:17.423174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906299 ] 00:06:32.930 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.930 [2024-04-15 22:33:17.490229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.930 [2024-04-15 22:33:17.551967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.870 22:33:18 -- accel/accel.sh@18 -- # out=' 00:06:33.870 SPDK Configuration: 00:06:33.870 Core mask: 0x1 00:06:33.870 00:06:33.870 Accel Perf Configuration: 00:06:33.870 Workload Type: crc32c 00:06:33.870 CRC-32C seed: 0 00:06:33.870 Transfer size: 4096 bytes 00:06:33.870 Vector count 2 00:06:33.870 Module: software 00:06:33.870 Queue depth: 32 00:06:33.870 Allocate depth: 32 00:06:33.870 # threads/core: 1 00:06:33.870 Run time: 1 seconds 00:06:33.870 Verify: Yes 00:06:33.870 00:06:33.870 Running for 1 seconds... 00:06:33.870 00:06:33.870 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.870 ------------------------------------------------------------------------------------ 00:06:33.870 0,0 375808/s 2936 MiB/s 0 0 00:06:33.870 ==================================================================================== 00:06:33.870 Total 375808/s 1468 MiB/s 0 0' 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:34.131 22:33:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:34.131 22:33:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.131 22:33:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.131 22:33:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.131 22:33:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.131 22:33:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.131 22:33:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.131 22:33:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.131 22:33:18 -- accel/accel.sh@42 -- # jq -r . 00:06:34.131 [2024-04-15 22:33:18.703474] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
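accel_crc32c_C2 adds -C 2, so each operation covers two 4096-byte vectors ("Vector count 2"). With 375808 operations per second, the per-core line appears to count both vectors per operation (2936 MiB/s) while the Total line counts only one (1468 MiB/s); both figures are reproducible from the printed numbers:

    echo $(( 375808 * 2 * 4096 / 1048576 ))   # -> 2936 MiB/s (per-core line)
    echo $(( 375808 * 4096 / 1048576 ))       # -> 1468 MiB/s (Total line)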
00:06:34.131 [2024-04-15 22:33:18.703554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906582 ] 00:06:34.131 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.131 [2024-04-15 22:33:18.770348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.131 [2024-04-15 22:33:18.833940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val= 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val= 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val=0x1 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val= 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val= 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val=crc32c 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val=0 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val= 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val=software 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val=32 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val=32 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- 
accel/accel.sh@21 -- # val=1 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val=Yes 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val= 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:34.131 22:33:18 -- accel/accel.sh@21 -- # val= 00:06:34.131 22:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # IFS=: 00:06:34.131 22:33:18 -- accel/accel.sh@20 -- # read -r var val 00:06:35.514 22:33:19 -- accel/accel.sh@21 -- # val= 00:06:35.514 22:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # IFS=: 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # read -r var val 00:06:35.514 22:33:19 -- accel/accel.sh@21 -- # val= 00:06:35.514 22:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # IFS=: 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # read -r var val 00:06:35.514 22:33:19 -- accel/accel.sh@21 -- # val= 00:06:35.514 22:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # IFS=: 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # read -r var val 00:06:35.514 22:33:19 -- accel/accel.sh@21 -- # val= 00:06:35.514 22:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # IFS=: 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # read -r var val 00:06:35.514 22:33:19 -- accel/accel.sh@21 -- # val= 00:06:35.514 22:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # IFS=: 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # read -r var val 00:06:35.514 22:33:19 -- accel/accel.sh@21 -- # val= 00:06:35.514 22:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # IFS=: 00:06:35.514 22:33:19 -- accel/accel.sh@20 -- # read -r var val 00:06:35.514 22:33:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.514 22:33:19 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:35.514 22:33:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.514 00:06:35.514 real 0m2.569s 00:06:35.514 user 0m2.358s 00:06:35.514 sys 0m0.216s 00:06:35.514 22:33:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.514 22:33:19 -- common/autotest_common.sh@10 -- # set +x 00:06:35.514 ************************************ 00:06:35.514 END TEST accel_crc32c_C2 00:06:35.514 ************************************ 00:06:35.514 22:33:20 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:35.514 22:33:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:35.514 22:33:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.514 22:33:20 -- common/autotest_common.sh@10 -- # set +x 00:06:35.514 ************************************ 00:06:35.514 START TEST accel_copy 
00:06:35.514 ************************************ 00:06:35.514 22:33:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:35.514 22:33:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.514 22:33:20 -- accel/accel.sh@17 -- # local accel_module 00:06:35.514 22:33:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:35.514 22:33:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:35.514 22:33:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.514 22:33:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.514 22:33:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.514 22:33:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.514 22:33:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.514 22:33:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.514 22:33:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.514 22:33:20 -- accel/accel.sh@42 -- # jq -r . 00:06:35.514 [2024-04-15 22:33:20.036986] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:35.514 [2024-04-15 22:33:20.037055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906768 ] 00:06:35.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.514 [2024-04-15 22:33:20.104310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.514 [2024-04-15 22:33:20.168697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.900 22:33:21 -- accel/accel.sh@18 -- # out=' 00:06:36.900 SPDK Configuration: 00:06:36.900 Core mask: 0x1 00:06:36.900 00:06:36.900 Accel Perf Configuration: 00:06:36.900 Workload Type: copy 00:06:36.900 Transfer size: 4096 bytes 00:06:36.900 Vector count 1 00:06:36.900 Module: software 00:06:36.900 Queue depth: 32 00:06:36.900 Allocate depth: 32 00:06:36.900 # threads/core: 1 00:06:36.900 Run time: 1 seconds 00:06:36.900 Verify: Yes 00:06:36.900 00:06:36.900 Running for 1 seconds... 00:06:36.900 00:06:36.900 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.900 ------------------------------------------------------------------------------------ 00:06:36.900 0,0 305152/s 1192 MiB/s 0 0 00:06:36.900 ==================================================================================== 00:06:36.900 Total 305152/s 1192 MiB/s 0 0' 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:36.900 22:33:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:36.900 22:33:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.900 22:33:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.900 22:33:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.900 22:33:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.900 22:33:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.900 22:33:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.900 22:33:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.900 22:33:21 -- accel/accel.sh@42 -- # jq -r . 00:06:36.900 [2024-04-15 22:33:21.321460] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
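Note on the val=/case traces seen for crc32c, crc32c_C2 and copy above: the IFS=: / read -r var val loop walks accel_perf's printed configuration block line by line, so the val=0x1, val=crc32c, val=software, val=Yes assignments correspond to the "Core mask", "Workload Type", "Module" and "Verify" lines, and the closing [[ -n software ]] / [[ software == \s\o\f\t\w\a\r\e ]] checks assert the op really ran on the expected module. A rough reconstruction of that parsing step (not a verbatim copy of accel.sh; the key names come from the printed configuration block):

    out=$(accel_perf -t 1 -w copy -y)
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=$(echo $val) ;;      # unquoted echo trims the leading space -> "copy"
            *Module*)          accel_module=$(echo $val) ;;   # -> "software"
        esac
    done <<< "$out"
    [[ -n $accel_module && -n $accel_opc ]]   # the [[ -n software ]] / [[ -n copy ]] checks in the trace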
00:06:36.900 [2024-04-15 22:33:21.321568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907012 ] 00:06:36.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.900 [2024-04-15 22:33:21.388642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.900 [2024-04-15 22:33:21.451597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val= 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val= 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val=0x1 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val= 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val= 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val=copy 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val= 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val=software 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val=32 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val=32 00:06:36.900 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.900 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.900 22:33:21 -- accel/accel.sh@21 -- # val=1 00:06:36.901 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.901 22:33:21 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:36.901 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.901 22:33:21 -- accel/accel.sh@21 -- # val=Yes 00:06:36.901 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.901 22:33:21 -- accel/accel.sh@21 -- # val= 00:06:36.901 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:36.901 22:33:21 -- accel/accel.sh@21 -- # val= 00:06:36.901 22:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # IFS=: 00:06:36.901 22:33:21 -- accel/accel.sh@20 -- # read -r var val 00:06:37.848 22:33:22 -- accel/accel.sh@21 -- # val= 00:06:37.848 22:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # IFS=: 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # read -r var val 00:06:37.848 22:33:22 -- accel/accel.sh@21 -- # val= 00:06:37.848 22:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # IFS=: 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # read -r var val 00:06:37.848 22:33:22 -- accel/accel.sh@21 -- # val= 00:06:37.848 22:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # IFS=: 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # read -r var val 00:06:37.848 22:33:22 -- accel/accel.sh@21 -- # val= 00:06:37.848 22:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # IFS=: 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # read -r var val 00:06:37.848 22:33:22 -- accel/accel.sh@21 -- # val= 00:06:37.848 22:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # IFS=: 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # read -r var val 00:06:37.848 22:33:22 -- accel/accel.sh@21 -- # val= 00:06:37.848 22:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # IFS=: 00:06:37.848 22:33:22 -- accel/accel.sh@20 -- # read -r var val 00:06:37.848 22:33:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.848 22:33:22 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:37.848 22:33:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.848 00:06:37.848 real 0m2.573s 00:06:37.848 user 0m2.377s 00:06:37.848 sys 0m0.200s 00:06:37.848 22:33:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.848 22:33:22 -- common/autotest_common.sh@10 -- # set +x 00:06:37.848 ************************************ 00:06:37.848 END TEST accel_copy 00:06:37.848 ************************************ 00:06:37.848 22:33:22 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.848 22:33:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:37.848 22:33:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.848 22:33:22 -- common/autotest_common.sh@10 -- # set +x 00:06:37.848 ************************************ 00:06:37.848 START TEST accel_fill 00:06:37.848 ************************************ 00:06:37.848 22:33:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.848 22:33:22 -- accel/accel.sh@16 -- # local accel_opc 
00:06:37.848 22:33:22 -- accel/accel.sh@17 -- # local accel_module 00:06:37.848 22:33:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.848 22:33:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.848 22:33:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.848 22:33:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.848 22:33:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.848 22:33:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.848 22:33:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.848 22:33:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.848 22:33:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.848 22:33:22 -- accel/accel.sh@42 -- # jq -r . 00:06:37.848 [2024-04-15 22:33:22.652926] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:37.848 [2024-04-15 22:33:22.653013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907361 ] 00:06:38.108 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.108 [2024-04-15 22:33:22.721356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.108 [2024-04-15 22:33:22.784345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.492 22:33:23 -- accel/accel.sh@18 -- # out=' 00:06:39.492 SPDK Configuration: 00:06:39.492 Core mask: 0x1 00:06:39.492 00:06:39.492 Accel Perf Configuration: 00:06:39.492 Workload Type: fill 00:06:39.492 Fill pattern: 0x80 00:06:39.492 Transfer size: 4096 bytes 00:06:39.492 Vector count 1 00:06:39.492 Module: software 00:06:39.492 Queue depth: 64 00:06:39.492 Allocate depth: 64 00:06:39.492 # threads/core: 1 00:06:39.492 Run time: 1 seconds 00:06:39.492 Verify: Yes 00:06:39.492 00:06:39.492 Running for 1 seconds... 00:06:39.492 00:06:39.492 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.492 ------------------------------------------------------------------------------------ 00:06:39.492 0,0 470720/s 1838 MiB/s 0 0 00:06:39.492 ==================================================================================== 00:06:39.492 Total 470720/s 1838 MiB/s 0 0' 00:06:39.492 22:33:23 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:23 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.493 22:33:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.493 22:33:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.493 22:33:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.493 22:33:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.493 22:33:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.493 22:33:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.493 22:33:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.493 22:33:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.493 22:33:23 -- accel/accel.sh@42 -- # jq -r . 00:06:39.493 [2024-04-15 22:33:23.938335] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
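For accel_fill, the -f 128 byte value appears in the configuration block as the hex fill pattern, and -q 64 / -a 64 become the queue depth and allocate depth of 64 (matching the -q and -a descriptions in the help text quoted earlier). A one-line check of the pattern value:

    printf '0x%x\n' 128   # -> 0x80, the "Fill pattern" accel_perf reports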
00:06:39.493 [2024-04-15 22:33:23.938444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907703 ] 00:06:39.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.493 [2024-04-15 22:33:24.005648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.493 [2024-04-15 22:33:24.068062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val= 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val= 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val=0x1 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val= 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val= 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val=fill 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val=0x80 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val= 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val=software 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val=64 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val=64 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- 
accel/accel.sh@21 -- # val=1 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val=Yes 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val= 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:39.493 22:33:24 -- accel/accel.sh@21 -- # val= 00:06:39.493 22:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # IFS=: 00:06:39.493 22:33:24 -- accel/accel.sh@20 -- # read -r var val 00:06:40.435 22:33:25 -- accel/accel.sh@21 -- # val= 00:06:40.435 22:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # IFS=: 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # read -r var val 00:06:40.435 22:33:25 -- accel/accel.sh@21 -- # val= 00:06:40.435 22:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # IFS=: 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # read -r var val 00:06:40.435 22:33:25 -- accel/accel.sh@21 -- # val= 00:06:40.435 22:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # IFS=: 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # read -r var val 00:06:40.435 22:33:25 -- accel/accel.sh@21 -- # val= 00:06:40.435 22:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # IFS=: 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # read -r var val 00:06:40.435 22:33:25 -- accel/accel.sh@21 -- # val= 00:06:40.435 22:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # IFS=: 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # read -r var val 00:06:40.435 22:33:25 -- accel/accel.sh@21 -- # val= 00:06:40.435 22:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # IFS=: 00:06:40.435 22:33:25 -- accel/accel.sh@20 -- # read -r var val 00:06:40.435 22:33:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.435 22:33:25 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:40.435 22:33:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.435 00:06:40.435 real 0m2.574s 00:06:40.435 user 0m2.368s 00:06:40.435 sys 0m0.213s 00:06:40.435 22:33:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.435 22:33:25 -- common/autotest_common.sh@10 -- # set +x 00:06:40.435 ************************************ 00:06:40.435 END TEST accel_fill 00:06:40.435 ************************************ 00:06:40.435 22:33:25 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:40.435 22:33:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:40.435 22:33:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.435 22:33:25 -- common/autotest_common.sh@10 -- # set +x 00:06:40.436 ************************************ 00:06:40.436 START TEST 
accel_copy_crc32c 00:06:40.436 ************************************ 00:06:40.436 22:33:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:40.436 22:33:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.436 22:33:25 -- accel/accel.sh@17 -- # local accel_module 00:06:40.696 22:33:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:40.696 22:33:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:40.696 22:33:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.696 22:33:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.696 22:33:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.696 22:33:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.696 22:33:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.696 22:33:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.696 22:33:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.696 22:33:25 -- accel/accel.sh@42 -- # jq -r . 00:06:40.696 [2024-04-15 22:33:25.268802] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:40.696 [2024-04-15 22:33:25.268874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907969 ] 00:06:40.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.696 [2024-04-15 22:33:25.337027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.696 [2024-04-15 22:33:25.400275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.081 22:33:26 -- accel/accel.sh@18 -- # out=' 00:06:42.081 SPDK Configuration: 00:06:42.081 Core mask: 0x1 00:06:42.081 00:06:42.081 Accel Perf Configuration: 00:06:42.081 Workload Type: copy_crc32c 00:06:42.081 CRC-32C seed: 0 00:06:42.081 Vector size: 4096 bytes 00:06:42.081 Transfer size: 4096 bytes 00:06:42.081 Vector count 1 00:06:42.081 Module: software 00:06:42.081 Queue depth: 32 00:06:42.081 Allocate depth: 32 00:06:42.081 # threads/core: 1 00:06:42.081 Run time: 1 seconds 00:06:42.081 Verify: Yes 00:06:42.081 00:06:42.081 Running for 1 seconds... 00:06:42.081 00:06:42.081 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.081 ------------------------------------------------------------------------------------ 00:06:42.081 0,0 248032/s 968 MiB/s 0 0 00:06:42.081 ==================================================================================== 00:06:42.081 Total 248032/s 968 MiB/s 0 0' 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.081 22:33:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:42.081 22:33:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:42.081 22:33:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.081 22:33:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.081 22:33:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.081 22:33:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.081 22:33:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.081 22:33:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.081 22:33:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.081 22:33:26 -- accel/accel.sh@42 -- # jq -r . 
00:06:42.081 [2024-04-15 22:33:26.553638] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:42.081 [2024-04-15 22:33:26.553713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908122 ] 00:06:42.081 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.081 [2024-04-15 22:33:26.620293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.081 [2024-04-15 22:33:26.683639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.081 22:33:26 -- accel/accel.sh@21 -- # val= 00:06:42.081 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.081 22:33:26 -- accel/accel.sh@21 -- # val= 00:06:42.081 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.081 22:33:26 -- accel/accel.sh@21 -- # val=0x1 00:06:42.081 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.081 22:33:26 -- accel/accel.sh@21 -- # val= 00:06:42.081 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.081 22:33:26 -- accel/accel.sh@21 -- # val= 00:06:42.081 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.081 22:33:26 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:42.081 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.081 22:33:26 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:42.081 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val=0 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val= 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val=software 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val=32 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 
00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val=32 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val=1 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val=Yes 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val= 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:42.082 22:33:26 -- accel/accel.sh@21 -- # val= 00:06:42.082 22:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # IFS=: 00:06:42.082 22:33:26 -- accel/accel.sh@20 -- # read -r var val 00:06:43.084 22:33:27 -- accel/accel.sh@21 -- # val= 00:06:43.084 22:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # IFS=: 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # read -r var val 00:06:43.084 22:33:27 -- accel/accel.sh@21 -- # val= 00:06:43.084 22:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # IFS=: 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # read -r var val 00:06:43.084 22:33:27 -- accel/accel.sh@21 -- # val= 00:06:43.084 22:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # IFS=: 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # read -r var val 00:06:43.084 22:33:27 -- accel/accel.sh@21 -- # val= 00:06:43.084 22:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # IFS=: 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # read -r var val 00:06:43.084 22:33:27 -- accel/accel.sh@21 -- # val= 00:06:43.084 22:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # IFS=: 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # read -r var val 00:06:43.084 22:33:27 -- accel/accel.sh@21 -- # val= 00:06:43.084 22:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # IFS=: 00:06:43.084 22:33:27 -- accel/accel.sh@20 -- # read -r var val 00:06:43.084 22:33:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.084 22:33:27 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:43.084 22:33:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.084 00:06:43.084 real 0m2.572s 00:06:43.084 user 0m2.379s 00:06:43.084 sys 0m0.200s 00:06:43.084 22:33:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.084 22:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.084 ************************************ 00:06:43.084 END TEST accel_copy_crc32c 00:06:43.084 ************************************ 00:06:43.084 
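The copy_crc32c cases above drive the SPDK accel_perf example binary with the flags echoed in their command lines. A minimal standalone sketch of the same invocations, assuming the workspace path recorded in the log and that the JSON accel config piped in via -c /dev/fd/62 (empty in these runs, per build_accel_config) can simply be omitted:

    # Hypothetical manual reproduction; flags mirror the logged accel_perf command lines.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    # 1-second copy+CRC32C run with result verification, as in the test above
    $PERF -t 1 -w copy_crc32c -y
    # two-vector variant exercised by the next test (reported as Vector count 2 / 8192-byte transfers)
    $PERF -t 1 -w copy_crc32c -y -C 2

The -t, -w, -y and -C values map directly onto the Run time, Workload Type, Verify and Vector count fields of the SPDK Configuration block printed at the start of each run.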
22:33:27 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:43.084 22:33:27 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:43.084 22:33:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.084 22:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.084 ************************************ 00:06:43.084 START TEST accel_copy_crc32c_C2 00:06:43.084 ************************************ 00:06:43.084 22:33:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:43.084 22:33:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.084 22:33:27 -- accel/accel.sh@17 -- # local accel_module 00:06:43.084 22:33:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:43.084 22:33:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:43.084 22:33:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.084 22:33:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.084 22:33:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.084 22:33:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.084 22:33:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.084 22:33:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.084 22:33:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.084 22:33:27 -- accel/accel.sh@42 -- # jq -r . 00:06:43.084 [2024-04-15 22:33:27.883514] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:43.084 [2024-04-15 22:33:27.883590] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908425 ] 00:06:43.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.345 [2024-04-15 22:33:27.951430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.345 [2024-04-15 22:33:28.014893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.728 22:33:29 -- accel/accel.sh@18 -- # out=' 00:06:44.728 SPDK Configuration: 00:06:44.728 Core mask: 0x1 00:06:44.728 00:06:44.728 Accel Perf Configuration: 00:06:44.728 Workload Type: copy_crc32c 00:06:44.728 CRC-32C seed: 0 00:06:44.728 Vector size: 4096 bytes 00:06:44.728 Transfer size: 8192 bytes 00:06:44.728 Vector count 2 00:06:44.728 Module: software 00:06:44.728 Queue depth: 32 00:06:44.728 Allocate depth: 32 00:06:44.728 # threads/core: 1 00:06:44.728 Run time: 1 seconds 00:06:44.728 Verify: Yes 00:06:44.728 00:06:44.728 Running for 1 seconds... 
00:06:44.728 00:06:44.728 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.728 ------------------------------------------------------------------------------------ 00:06:44.728 0,0 187616/s 1465 MiB/s 0 0 00:06:44.728 ==================================================================================== 00:06:44.728 Total 187616/s 732 MiB/s 0 0' 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:44.728 22:33:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:44.728 22:33:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.728 22:33:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.728 22:33:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.728 22:33:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.728 22:33:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.728 22:33:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.728 22:33:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.728 22:33:29 -- accel/accel.sh@42 -- # jq -r . 00:06:44.728 [2024-04-15 22:33:29.168193] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:44.728 [2024-04-15 22:33:29.168267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908764 ] 00:06:44.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.728 [2024-04-15 22:33:29.234765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.728 [2024-04-15 22:33:29.297560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val= 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val= 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val=0x1 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val= 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val= 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val=0 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 
00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.728 22:33:29 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:44.728 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.728 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val= 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val=software 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val=32 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val=32 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val=1 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val=Yes 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val= 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:44.729 22:33:29 -- accel/accel.sh@21 -- # val= 00:06:44.729 22:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # IFS=: 00:06:44.729 22:33:29 -- accel/accel.sh@20 -- # read -r var val 00:06:45.672 22:33:30 -- accel/accel.sh@21 -- # val= 00:06:45.672 22:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # IFS=: 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # read -r var val 00:06:45.672 22:33:30 -- accel/accel.sh@21 -- # val= 00:06:45.672 22:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # IFS=: 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # read -r var val 00:06:45.672 22:33:30 -- accel/accel.sh@21 -- # val= 00:06:45.672 22:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # IFS=: 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # read -r var val 00:06:45.672 22:33:30 -- accel/accel.sh@21 -- # val= 00:06:45.672 22:33:30 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # IFS=: 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # read -r var val 00:06:45.672 22:33:30 -- accel/accel.sh@21 -- # val= 00:06:45.672 22:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # IFS=: 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # read -r var val 00:06:45.672 22:33:30 -- accel/accel.sh@21 -- # val= 00:06:45.672 22:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # IFS=: 00:06:45.672 22:33:30 -- accel/accel.sh@20 -- # read -r var val 00:06:45.672 22:33:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.672 22:33:30 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:45.672 22:33:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.672 00:06:45.672 real 0m2.572s 00:06:45.672 user 0m2.359s 00:06:45.672 sys 0m0.220s 00:06:45.672 22:33:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.672 22:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:45.672 ************************************ 00:06:45.672 END TEST accel_copy_crc32c_C2 00:06:45.672 ************************************ 00:06:45.672 22:33:30 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:45.672 22:33:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:45.672 22:33:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.672 22:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:45.672 ************************************ 00:06:45.672 START TEST accel_dualcast 00:06:45.672 ************************************ 00:06:45.672 22:33:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:45.672 22:33:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.672 22:33:30 -- accel/accel.sh@17 -- # local accel_module 00:06:45.672 22:33:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:45.672 22:33:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:45.672 22:33:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.672 22:33:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.672 22:33:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.672 22:33:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.672 22:33:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.672 22:33:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.672 22:33:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.672 22:33:30 -- accel/accel.sh@42 -- # jq -r . 00:06:45.934 [2024-04-15 22:33:30.498671] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:45.934 [2024-04-15 22:33:30.498745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909115 ] 00:06:45.934 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.934 [2024-04-15 22:33:30.566611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.934 [2024-04-15 22:33:30.631951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.320 22:33:31 -- accel/accel.sh@18 -- # out=' 00:06:47.320 SPDK Configuration: 00:06:47.320 Core mask: 0x1 00:06:47.320 00:06:47.320 Accel Perf Configuration: 00:06:47.320 Workload Type: dualcast 00:06:47.320 Transfer size: 4096 bytes 00:06:47.320 Vector count 1 00:06:47.320 Module: software 00:06:47.320 Queue depth: 32 00:06:47.320 Allocate depth: 32 00:06:47.320 # threads/core: 1 00:06:47.320 Run time: 1 seconds 00:06:47.320 Verify: Yes 00:06:47.320 00:06:47.320 Running for 1 seconds... 00:06:47.320 00:06:47.320 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.320 ------------------------------------------------------------------------------------ 00:06:47.320 0,0 365248/s 1426 MiB/s 0 0 00:06:47.320 ==================================================================================== 00:06:47.320 Total 365248/s 1426 MiB/s 0 0' 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.320 22:33:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:47.320 22:33:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:47.320 22:33:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.320 22:33:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.320 22:33:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.320 22:33:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.320 22:33:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.320 22:33:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.320 22:33:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.320 22:33:31 -- accel/accel.sh@42 -- # jq -r . 00:06:47.320 [2024-04-15 22:33:31.785358] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:47.320 [2024-04-15 22:33:31.785457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909305 ] 00:06:47.320 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.320 [2024-04-15 22:33:31.853308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.320 [2024-04-15 22:33:31.916773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.320 22:33:31 -- accel/accel.sh@21 -- # val= 00:06:47.320 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.320 22:33:31 -- accel/accel.sh@21 -- # val= 00:06:47.320 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.320 22:33:31 -- accel/accel.sh@21 -- # val=0x1 00:06:47.320 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.320 22:33:31 -- accel/accel.sh@21 -- # val= 00:06:47.320 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.320 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val= 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val=dualcast 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val= 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val=software 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val=32 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val=32 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val=1 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val=Yes 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val= 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:47.321 22:33:31 -- accel/accel.sh@21 -- # val= 00:06:47.321 22:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:47.321 22:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:48.266 22:33:33 -- accel/accel.sh@21 -- # val= 00:06:48.266 22:33:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.266 22:33:33 -- accel/accel.sh@20 -- # IFS=: 00:06:48.266 22:33:33 -- accel/accel.sh@20 -- # read -r var val 00:06:48.266 22:33:33 -- accel/accel.sh@21 -- # val= 00:06:48.266 22:33:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.266 22:33:33 -- accel/accel.sh@20 -- # IFS=: 00:06:48.266 22:33:33 -- accel/accel.sh@20 -- # read -r var val 00:06:48.266 22:33:33 -- accel/accel.sh@21 -- # val= 00:06:48.267 22:33:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.267 22:33:33 -- accel/accel.sh@20 -- # IFS=: 00:06:48.267 22:33:33 -- accel/accel.sh@20 -- # read -r var val 00:06:48.267 22:33:33 -- accel/accel.sh@21 -- # val= 00:06:48.267 22:33:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.267 22:33:33 -- accel/accel.sh@20 -- # IFS=: 00:06:48.267 22:33:33 -- accel/accel.sh@20 -- # read -r var val 00:06:48.267 22:33:33 -- accel/accel.sh@21 -- # val= 00:06:48.267 22:33:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.267 22:33:33 -- accel/accel.sh@20 -- # IFS=: 00:06:48.267 22:33:33 -- accel/accel.sh@20 -- # read -r var val 00:06:48.267 22:33:33 -- accel/accel.sh@21 -- # val= 00:06:48.267 22:33:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.267 22:33:33 -- accel/accel.sh@20 -- # IFS=: 00:06:48.267 22:33:33 -- accel/accel.sh@20 -- # read -r var val 00:06:48.267 22:33:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.267 22:33:33 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:48.267 22:33:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.267 00:06:48.267 real 0m2.577s 00:06:48.267 user 0m2.369s 00:06:48.267 sys 0m0.214s 00:06:48.267 22:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.267 22:33:33 -- common/autotest_common.sh@10 -- # set +x 00:06:48.267 ************************************ 00:06:48.267 END TEST accel_dualcast 00:06:48.267 ************************************ 00:06:48.529 22:33:33 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:48.529 22:33:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:48.529 22:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.529 22:33:33 -- common/autotest_common.sh@10 -- # set +x 00:06:48.529 ************************************ 00:06:48.529 START TEST accel_compare 00:06:48.529 ************************************ 00:06:48.529 22:33:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:48.529 22:33:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.529 22:33:33 -- 
accel/accel.sh@17 -- # local accel_module 00:06:48.529 22:33:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:48.529 22:33:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:48.529 22:33:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.529 22:33:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.529 22:33:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.529 22:33:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.529 22:33:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.529 22:33:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.529 22:33:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.529 22:33:33 -- accel/accel.sh@42 -- # jq -r . 00:06:48.529 [2024-04-15 22:33:33.099081] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:48.529 [2024-04-15 22:33:33.099125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909510 ] 00:06:48.529 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.529 [2024-04-15 22:33:33.155938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.529 [2024-04-15 22:33:33.219569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.915 22:33:34 -- accel/accel.sh@18 -- # out=' 00:06:49.915 SPDK Configuration: 00:06:49.915 Core mask: 0x1 00:06:49.915 00:06:49.915 Accel Perf Configuration: 00:06:49.915 Workload Type: compare 00:06:49.915 Transfer size: 4096 bytes 00:06:49.915 Vector count 1 00:06:49.915 Module: software 00:06:49.915 Queue depth: 32 00:06:49.915 Allocate depth: 32 00:06:49.915 # threads/core: 1 00:06:49.915 Run time: 1 seconds 00:06:49.915 Verify: Yes 00:06:49.915 00:06:49.915 Running for 1 seconds... 00:06:49.915 00:06:49.915 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.915 ------------------------------------------------------------------------------------ 00:06:49.915 0,0 436832/s 1706 MiB/s 0 0 00:06:49.915 ==================================================================================== 00:06:49.915 Total 436832/s 1706 MiB/s 0 0' 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:49.915 22:33:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:49.915 22:33:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.915 22:33:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.915 22:33:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.915 22:33:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.915 22:33:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.915 22:33:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.915 22:33:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.915 22:33:34 -- accel/accel.sh@42 -- # jq -r . 00:06:49.915 [2024-04-15 22:33:34.372965] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:49.915 [2024-04-15 22:33:34.373069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909826 ] 00:06:49.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.915 [2024-04-15 22:33:34.440953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.915 [2024-04-15 22:33:34.507333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val= 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val= 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val=0x1 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val= 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val= 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val=compare 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val= 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val=software 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val=32 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val=32 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.915 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.915 22:33:34 -- accel/accel.sh@21 -- # val=1 00:06:49.915 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.916 22:33:34 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:49.916 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.916 22:33:34 -- accel/accel.sh@21 -- # val=Yes 00:06:49.916 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.916 22:33:34 -- accel/accel.sh@21 -- # val= 00:06:49.916 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:49.916 22:33:34 -- accel/accel.sh@21 -- # val= 00:06:49.916 22:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:49.916 22:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:50.858 22:33:35 -- accel/accel.sh@21 -- # val= 00:06:50.858 22:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:50.858 22:33:35 -- accel/accel.sh@21 -- # val= 00:06:50.858 22:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:50.858 22:33:35 -- accel/accel.sh@21 -- # val= 00:06:50.858 22:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:50.858 22:33:35 -- accel/accel.sh@21 -- # val= 00:06:50.858 22:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:50.858 22:33:35 -- accel/accel.sh@21 -- # val= 00:06:50.858 22:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:50.858 22:33:35 -- accel/accel.sh@21 -- # val= 00:06:50.858 22:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:50.858 22:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:50.858 22:33:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.858 22:33:35 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:50.858 22:33:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.858 00:06:50.858 real 0m2.548s 00:06:50.858 user 0m2.360s 00:06:50.858 sys 0m0.194s 00:06:50.858 22:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.858 22:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:50.858 ************************************ 00:06:50.858 END TEST accel_compare 00:06:50.858 ************************************ 00:06:51.119 22:33:35 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:51.119 22:33:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:51.119 22:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.119 22:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:51.119 ************************************ 00:06:51.119 START TEST accel_xor 00:06:51.119 ************************************ 00:06:51.119 22:33:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:51.119 22:33:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.119 22:33:35 -- accel/accel.sh@17 
-- # local accel_module 00:06:51.119 22:33:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:51.119 22:33:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:51.119 22:33:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.119 22:33:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.119 22:33:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.119 22:33:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.119 22:33:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.119 22:33:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.119 22:33:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.119 22:33:35 -- accel/accel.sh@42 -- # jq -r . 00:06:51.119 [2024-04-15 22:33:35.708789] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:51.119 [2024-04-15 22:33:35.708882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910177 ] 00:06:51.119 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.119 [2024-04-15 22:33:35.786025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.119 [2024-04-15 22:33:35.851858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.503 22:33:36 -- accel/accel.sh@18 -- # out=' 00:06:52.503 SPDK Configuration: 00:06:52.503 Core mask: 0x1 00:06:52.503 00:06:52.503 Accel Perf Configuration: 00:06:52.503 Workload Type: xor 00:06:52.503 Source buffers: 2 00:06:52.503 Transfer size: 4096 bytes 00:06:52.503 Vector count 1 00:06:52.503 Module: software 00:06:52.503 Queue depth: 32 00:06:52.503 Allocate depth: 32 00:06:52.503 # threads/core: 1 00:06:52.503 Run time: 1 seconds 00:06:52.503 Verify: Yes 00:06:52.503 00:06:52.503 Running for 1 seconds... 00:06:52.503 00:06:52.503 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.503 ------------------------------------------------------------------------------------ 00:06:52.503 0,0 361536/s 1412 MiB/s 0 0 00:06:52.503 ==================================================================================== 00:06:52.503 Total 361536/s 1412 MiB/s 0 0' 00:06:52.503 22:33:36 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:36 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:52.503 22:33:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:52.503 22:33:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.503 22:33:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.503 22:33:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.503 22:33:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.503 22:33:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.503 22:33:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.503 22:33:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.503 22:33:36 -- accel/accel.sh@42 -- # jq -r . 00:06:52.503 [2024-04-15 22:33:37.003054] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:52.503 [2024-04-15 22:33:37.003138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910499 ] 00:06:52.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.503 [2024-04-15 22:33:37.070467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.503 [2024-04-15 22:33:37.134513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val= 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val= 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val=0x1 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val= 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val= 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val=xor 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val=2 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val= 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.503 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.503 22:33:37 -- accel/accel.sh@21 -- # val=software 00:06:52.503 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.503 22:33:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.504 22:33:37 -- accel/accel.sh@21 -- # val=32 00:06:52.504 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.504 22:33:37 -- accel/accel.sh@21 -- # val=32 00:06:52.504 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.504 22:33:37 -- 
accel/accel.sh@21 -- # val=1 00:06:52.504 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.504 22:33:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.504 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.504 22:33:37 -- accel/accel.sh@21 -- # val=Yes 00:06:52.504 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.504 22:33:37 -- accel/accel.sh@21 -- # val= 00:06:52.504 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:52.504 22:33:37 -- accel/accel.sh@21 -- # val= 00:06:52.504 22:33:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:52.504 22:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:53.888 22:33:38 -- accel/accel.sh@21 -- # val= 00:06:53.888 22:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:53.888 22:33:38 -- accel/accel.sh@21 -- # val= 00:06:53.888 22:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:53.888 22:33:38 -- accel/accel.sh@21 -- # val= 00:06:53.888 22:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:53.888 22:33:38 -- accel/accel.sh@21 -- # val= 00:06:53.888 22:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:53.888 22:33:38 -- accel/accel.sh@21 -- # val= 00:06:53.888 22:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:53.888 22:33:38 -- accel/accel.sh@21 -- # val= 00:06:53.888 22:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:53.888 22:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:53.888 22:33:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.888 22:33:38 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:53.888 22:33:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.888 00:06:53.888 real 0m2.583s 00:06:53.888 user 0m2.384s 00:06:53.888 sys 0m0.206s 00:06:53.888 22:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.888 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:53.888 ************************************ 00:06:53.888 END TEST accel_xor 00:06:53.888 ************************************ 00:06:53.888 22:33:38 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:53.888 22:33:38 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:53.888 22:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.888 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:53.888 ************************************ 00:06:53.888 START TEST accel_xor 
00:06:53.888 ************************************ 00:06:53.888 22:33:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:53.888 22:33:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.888 22:33:38 -- accel/accel.sh@17 -- # local accel_module 00:06:53.888 22:33:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:53.888 22:33:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:53.888 22:33:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.888 22:33:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.888 22:33:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.888 22:33:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.888 22:33:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.888 22:33:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.888 22:33:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.888 22:33:38 -- accel/accel.sh@42 -- # jq -r . 00:06:53.888 [2024-04-15 22:33:38.333645] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:53.888 [2024-04-15 22:33:38.333729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910689 ] 00:06:53.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.888 [2024-04-15 22:33:38.403042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.888 [2024-04-15 22:33:38.468940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.829 22:33:39 -- accel/accel.sh@18 -- # out=' 00:06:54.829 SPDK Configuration: 00:06:54.829 Core mask: 0x1 00:06:54.829 00:06:54.829 Accel Perf Configuration: 00:06:54.829 Workload Type: xor 00:06:54.829 Source buffers: 3 00:06:54.829 Transfer size: 4096 bytes 00:06:54.829 Vector count 1 00:06:54.829 Module: software 00:06:54.829 Queue depth: 32 00:06:54.829 Allocate depth: 32 00:06:54.829 # threads/core: 1 00:06:54.829 Run time: 1 seconds 00:06:54.829 Verify: Yes 00:06:54.829 00:06:54.829 Running for 1 seconds... 00:06:54.829 00:06:54.829 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.829 ------------------------------------------------------------------------------------ 00:06:54.829 0,0 344096/s 1344 MiB/s 0 0 00:06:54.829 ==================================================================================== 00:06:54.829 Total 344096/s 1344 MiB/s 0 0' 00:06:54.829 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:54.830 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:54.830 22:33:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:54.830 22:33:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:54.830 22:33:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.830 22:33:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.830 22:33:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.830 22:33:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.830 22:33:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.830 22:33:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.830 22:33:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.830 22:33:39 -- accel/accel.sh@42 -- # jq -r . 00:06:54.830 [2024-04-15 22:33:39.622219] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:54.830 [2024-04-15 22:33:39.622313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910884 ] 00:06:55.090 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.090 [2024-04-15 22:33:39.690408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.090 [2024-04-15 22:33:39.754095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val= 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val= 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val=0x1 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val= 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val= 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val=xor 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val=3 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val= 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val=software 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val=32 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val=32 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- 
accel/accel.sh@21 -- # val=1 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val=Yes 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val= 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:55.091 22:33:39 -- accel/accel.sh@21 -- # val= 00:06:55.091 22:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:55.091 22:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.473 22:33:40 -- accel/accel.sh@21 -- # val= 00:06:56.473 22:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.473 22:33:40 -- accel/accel.sh@21 -- # val= 00:06:56.473 22:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.473 22:33:40 -- accel/accel.sh@21 -- # val= 00:06:56.473 22:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.473 22:33:40 -- accel/accel.sh@21 -- # val= 00:06:56.473 22:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.473 22:33:40 -- accel/accel.sh@21 -- # val= 00:06:56.473 22:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.473 22:33:40 -- accel/accel.sh@21 -- # val= 00:06:56.473 22:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # IFS=: 00:06:56.473 22:33:40 -- accel/accel.sh@20 -- # read -r var val 00:06:56.473 22:33:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.473 22:33:40 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:56.473 22:33:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.473 00:06:56.473 real 0m2.577s 00:06:56.473 user 0m2.361s 00:06:56.473 sys 0m0.222s 00:06:56.473 22:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.473 22:33:40 -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 ************************************ 00:06:56.473 END TEST accel_xor 00:06:56.473 ************************************ 00:06:56.473 22:33:40 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:56.473 22:33:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:56.473 22:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.473 22:33:40 -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 ************************************ 00:06:56.473 START TEST 
accel_dif_verify 00:06:56.473 ************************************ 00:06:56.473 22:33:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:56.473 22:33:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.473 22:33:40 -- accel/accel.sh@17 -- # local accel_module 00:06:56.473 22:33:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:56.473 22:33:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:56.473 22:33:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.473 22:33:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.473 22:33:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.473 22:33:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.473 22:33:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.473 22:33:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.473 22:33:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.473 22:33:40 -- accel/accel.sh@42 -- # jq -r . 00:06:56.473 [2024-04-15 22:33:40.954102] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:56.473 [2024-04-15 22:33:40.954176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911233 ] 00:06:56.473 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.473 [2024-04-15 22:33:41.022310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.473 [2024-04-15 22:33:41.087995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.415 22:33:42 -- accel/accel.sh@18 -- # out=' 00:06:57.415 SPDK Configuration: 00:06:57.415 Core mask: 0x1 00:06:57.415 00:06:57.415 Accel Perf Configuration: 00:06:57.415 Workload Type: dif_verify 00:06:57.415 Vector size: 4096 bytes 00:06:57.415 Transfer size: 4096 bytes 00:06:57.415 Block size: 512 bytes 00:06:57.415 Metadata size: 8 bytes 00:06:57.415 Vector count 1 00:06:57.415 Module: software 00:06:57.415 Queue depth: 32 00:06:57.415 Allocate depth: 32 00:06:57.415 # threads/core: 1 00:06:57.415 Run time: 1 seconds 00:06:57.415 Verify: No 00:06:57.415 00:06:57.415 Running for 1 seconds... 00:06:57.415 00:06:57.415 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.415 ------------------------------------------------------------------------------------ 00:06:57.415 0,0 94752/s 375 MiB/s 0 0 00:06:57.415 ==================================================================================== 00:06:57.415 Total 94752/s 370 MiB/s 0 0' 00:06:57.415 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 22:33:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:57.415 22:33:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:57.415 22:33:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.415 22:33:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.415 22:33:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.415 22:33:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.415 22:33:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.415 22:33:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.415 22:33:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.415 22:33:42 -- accel/accel.sh@42 -- # jq -r . 
00:06:57.677 [2024-04-15 22:33:42.242995] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:57.677 [2024-04-15 22:33:42.243096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911575 ] 00:06:57.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.677 [2024-04-15 22:33:42.311394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.677 [2024-04-15 22:33:42.373777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val= 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val= 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val=0x1 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val= 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val= 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val=dif_verify 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val= 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val=software 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val=32 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val=32 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val=1 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val=No 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val= 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:57.677 22:33:42 -- accel/accel.sh@21 -- # val= 00:06:57.677 22:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:57.677 22:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.059 22:33:43 -- accel/accel.sh@21 -- # val= 00:06:59.059 22:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.059 22:33:43 -- accel/accel.sh@21 -- # val= 00:06:59.059 22:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.059 22:33:43 -- accel/accel.sh@21 -- # val= 00:06:59.059 22:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.059 22:33:43 -- accel/accel.sh@21 -- # val= 00:06:59.059 22:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.059 22:33:43 -- accel/accel.sh@21 -- # val= 00:06:59.059 22:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.059 22:33:43 -- accel/accel.sh@21 -- # val= 00:06:59.059 22:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:59.059 22:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:59.059 22:33:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.059 22:33:43 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:59.059 22:33:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.059 00:06:59.059 real 0m2.577s 00:06:59.059 user 0m2.374s 00:06:59.059 sys 0m0.211s 00:06:59.059 22:33:43 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.059 22:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:59.059 ************************************ 00:06:59.059 END TEST accel_dif_verify 00:06:59.059 ************************************ 00:06:59.059 22:33:43 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:59.059 22:33:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:59.059 22:33:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.059 22:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:59.059 ************************************ 00:06:59.059 START TEST accel_dif_generate 00:06:59.059 ************************************ 00:06:59.059 22:33:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:59.059 22:33:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.059 22:33:43 -- accel/accel.sh@17 -- # local accel_module 00:06:59.059 22:33:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:59.059 22:33:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:59.059 22:33:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.059 22:33:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.059 22:33:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.059 22:33:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.059 22:33:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.059 22:33:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.059 22:33:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.059 22:33:43 -- accel/accel.sh@42 -- # jq -r . 00:06:59.059 [2024-04-15 22:33:43.575368] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:59.059 [2024-04-15 22:33:43.575441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911856 ] 00:06:59.059 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.059 [2024-04-15 22:33:43.643724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.059 [2024-04-15 22:33:43.708874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.442 22:33:44 -- accel/accel.sh@18 -- # out=' 00:07:00.442 SPDK Configuration: 00:07:00.442 Core mask: 0x1 00:07:00.442 00:07:00.442 Accel Perf Configuration: 00:07:00.442 Workload Type: dif_generate 00:07:00.442 Vector size: 4096 bytes 00:07:00.442 Transfer size: 4096 bytes 00:07:00.442 Block size: 512 bytes 00:07:00.442 Metadata size: 8 bytes 00:07:00.442 Vector count 1 00:07:00.442 Module: software 00:07:00.442 Queue depth: 32 00:07:00.442 Allocate depth: 32 00:07:00.442 # threads/core: 1 00:07:00.442 Run time: 1 seconds 00:07:00.442 Verify: No 00:07:00.442 00:07:00.442 Running for 1 seconds... 
00:07:00.442 00:07:00.442 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.442 ------------------------------------------------------------------------------------ 00:07:00.442 0,0 114784/s 455 MiB/s 0 0 00:07:00.442 ==================================================================================== 00:07:00.442 Total 114784/s 448 MiB/s 0 0' 00:07:00.442 22:33:44 -- accel/accel.sh@20 -- # IFS=: 00:07:00.442 22:33:44 -- accel/accel.sh@20 -- # read -r var val 00:07:00.442 22:33:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:00.442 22:33:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:00.442 22:33:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.442 22:33:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.442 22:33:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.442 22:33:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.442 22:33:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.442 22:33:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.442 22:33:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.442 22:33:44 -- accel/accel.sh@42 -- # jq -r . 00:07:00.442 [2024-04-15 22:33:44.861357] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:00.442 [2024-04-15 22:33:44.861433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911999 ] 00:07:00.442 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.442 [2024-04-15 22:33:44.928670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.442 [2024-04-15 22:33:44.991728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.442 22:33:45 -- accel/accel.sh@21 -- # val= 00:07:00.442 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.442 22:33:45 -- accel/accel.sh@21 -- # val= 00:07:00.442 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.442 22:33:45 -- accel/accel.sh@21 -- # val=0x1 00:07:00.442 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.442 22:33:45 -- accel/accel.sh@21 -- # val= 00:07:00.442 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.442 22:33:45 -- accel/accel.sh@21 -- # val= 00:07:00.442 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.442 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val=dif_generate 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 
00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val= 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val=software 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val=32 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val=32 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val=1 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val=No 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val= 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:00.443 22:33:45 -- accel/accel.sh@21 -- # val= 00:07:00.443 22:33:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # IFS=: 00:07:00.443 22:33:45 -- accel/accel.sh@20 -- # read -r var val 00:07:01.383 22:33:46 -- accel/accel.sh@21 -- # val= 00:07:01.383 22:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # IFS=: 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # read -r var val 00:07:01.383 22:33:46 -- accel/accel.sh@21 -- # val= 00:07:01.383 22:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # IFS=: 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # read -r var val 00:07:01.383 22:33:46 -- accel/accel.sh@21 -- # val= 00:07:01.383 22:33:46 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # IFS=: 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # read -r var val 00:07:01.383 22:33:46 -- accel/accel.sh@21 -- # val= 00:07:01.383 22:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # IFS=: 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # read -r var val 00:07:01.383 22:33:46 -- accel/accel.sh@21 -- # val= 00:07:01.383 22:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # IFS=: 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # read -r var val 00:07:01.383 22:33:46 -- accel/accel.sh@21 -- # val= 00:07:01.383 22:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # IFS=: 00:07:01.383 22:33:46 -- accel/accel.sh@20 -- # read -r var val 00:07:01.383 22:33:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.383 22:33:46 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:01.383 22:33:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.383 00:07:01.383 real 0m2.574s 00:07:01.383 user 0m2.372s 00:07:01.383 sys 0m0.210s 00:07:01.383 22:33:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.384 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:07:01.384 ************************************ 00:07:01.384 END TEST accel_dif_generate 00:07:01.384 ************************************ 00:07:01.384 22:33:46 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:01.384 22:33:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:01.384 22:33:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.384 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:07:01.384 ************************************ 00:07:01.384 START TEST accel_dif_generate_copy 00:07:01.384 ************************************ 00:07:01.384 22:33:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:01.384 22:33:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.384 22:33:46 -- accel/accel.sh@17 -- # local accel_module 00:07:01.384 22:33:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:01.384 22:33:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:01.384 22:33:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.384 22:33:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.384 22:33:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.384 22:33:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.384 22:33:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.384 22:33:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.384 22:33:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.384 22:33:46 -- accel/accel.sh@42 -- # jq -r . 00:07:01.384 [2024-04-15 22:33:46.191608] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:01.384 [2024-04-15 22:33:46.191682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912295 ] 00:07:01.644 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.644 [2024-04-15 22:33:46.259513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.644 [2024-04-15 22:33:46.321880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.084 22:33:47 -- accel/accel.sh@18 -- # out=' 00:07:03.084 SPDK Configuration: 00:07:03.084 Core mask: 0x1 00:07:03.084 00:07:03.084 Accel Perf Configuration: 00:07:03.085 Workload Type: dif_generate_copy 00:07:03.085 Vector size: 4096 bytes 00:07:03.085 Transfer size: 4096 bytes 00:07:03.085 Vector count 1 00:07:03.085 Module: software 00:07:03.085 Queue depth: 32 00:07:03.085 Allocate depth: 32 00:07:03.085 # threads/core: 1 00:07:03.085 Run time: 1 seconds 00:07:03.085 Verify: No 00:07:03.085 00:07:03.085 Running for 1 seconds... 00:07:03.085 00:07:03.085 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.085 ------------------------------------------------------------------------------------ 00:07:03.085 0,0 87488/s 347 MiB/s 0 0 00:07:03.085 ==================================================================================== 00:07:03.085 Total 87488/s 341 MiB/s 0 0' 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:03.085 22:33:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:03.085 22:33:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.085 22:33:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.085 22:33:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.085 22:33:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.085 22:33:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.085 22:33:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.085 22:33:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.085 22:33:47 -- accel/accel.sh@42 -- # jq -r . 00:07:03.085 [2024-04-15 22:33:47.476108] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:03.085 [2024-04-15 22:33:47.476188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912635 ] 00:07:03.085 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.085 [2024-04-15 22:33:47.543370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.085 [2024-04-15 22:33:47.606537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val= 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val= 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val=0x1 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val= 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val= 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val= 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val=software 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val=32 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val=32 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var 
val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val=1 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val=No 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val= 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.085 22:33:47 -- accel/accel.sh@21 -- # val= 00:07:03.085 22:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # IFS=: 00:07:03.085 22:33:47 -- accel/accel.sh@20 -- # read -r var val 00:07:04.026 22:33:48 -- accel/accel.sh@21 -- # val= 00:07:04.026 22:33:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.026 22:33:48 -- accel/accel.sh@21 -- # val= 00:07:04.026 22:33:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.026 22:33:48 -- accel/accel.sh@21 -- # val= 00:07:04.026 22:33:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.026 22:33:48 -- accel/accel.sh@21 -- # val= 00:07:04.026 22:33:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.026 22:33:48 -- accel/accel.sh@21 -- # val= 00:07:04.026 22:33:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.026 22:33:48 -- accel/accel.sh@21 -- # val= 00:07:04.026 22:33:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # IFS=: 00:07:04.026 22:33:48 -- accel/accel.sh@20 -- # read -r var val 00:07:04.026 22:33:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.026 22:33:48 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:04.026 22:33:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.026 00:07:04.026 real 0m2.573s 00:07:04.026 user 0m2.354s 00:07:04.026 sys 0m0.225s 00:07:04.026 22:33:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.026 22:33:48 -- common/autotest_common.sh@10 -- # set +x 00:07:04.026 ************************************ 00:07:04.026 END TEST accel_dif_generate_copy 00:07:04.026 ************************************ 00:07:04.026 22:33:48 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:04.026 22:33:48 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.026 22:33:48 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:04.026 22:33:48 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.026 22:33:48 -- common/autotest_common.sh@10 -- # set +x 00:07:04.026 ************************************ 00:07:04.026 START TEST accel_comp 00:07:04.026 ************************************ 00:07:04.026 22:33:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.026 22:33:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.026 22:33:48 -- accel/accel.sh@17 -- # local accel_module 00:07:04.026 22:33:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.027 22:33:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.027 22:33:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.027 22:33:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.027 22:33:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.027 22:33:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.027 22:33:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.027 22:33:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.027 22:33:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.027 22:33:48 -- accel/accel.sh@42 -- # jq -r . 00:07:04.027 [2024-04-15 22:33:48.807158] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:04.027 [2024-04-15 22:33:48.807251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912987 ] 00:07:04.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.287 [2024-04-15 22:33:48.882412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.287 [2024-04-15 22:33:48.945598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.670 22:33:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:05.670 00:07:05.670 SPDK Configuration: 00:07:05.670 Core mask: 0x1 00:07:05.670 00:07:05.670 Accel Perf Configuration: 00:07:05.670 Workload Type: compress 00:07:05.670 Transfer size: 4096 bytes 00:07:05.670 Vector count 1 00:07:05.670 Module: software 00:07:05.670 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.670 Queue depth: 32 00:07:05.670 Allocate depth: 32 00:07:05.670 # threads/core: 1 00:07:05.670 Run time: 1 seconds 00:07:05.670 Verify: No 00:07:05.670 00:07:05.670 Running for 1 seconds... 
00:07:05.670 00:07:05.670 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.670 ------------------------------------------------------------------------------------ 00:07:05.670 0,0 47648/s 198 MiB/s 0 0 00:07:05.670 ==================================================================================== 00:07:05.670 Total 47648/s 186 MiB/s 0 0' 00:07:05.670 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.670 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 22:33:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.670 22:33:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.670 22:33:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.670 22:33:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.670 22:33:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.670 22:33:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.670 22:33:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.670 22:33:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.670 22:33:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.670 22:33:50 -- accel/accel.sh@42 -- # jq -r . 00:07:05.670 [2024-04-15 22:33:50.107215] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:05.670 [2024-04-15 22:33:50.107320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913175 ] 00:07:05.670 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.670 [2024-04-15 22:33:50.176812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.670 [2024-04-15 22:33:50.240517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.670 22:33:50 -- accel/accel.sh@21 -- # val= 00:07:05.670 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.670 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.670 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 22:33:50 -- accel/accel.sh@21 -- # val= 00:07:05.670 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.670 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val= 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val=0x1 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val= 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val= 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val=compress 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 
22:33:50 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val= 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val=software 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val=32 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val=32 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val=1 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val=No 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val= 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:05.671 22:33:50 -- accel/accel.sh@21 -- # val= 00:07:05.671 22:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # IFS=: 00:07:05.671 22:33:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.615 22:33:51 -- accel/accel.sh@21 -- # val= 00:07:06.615 22:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.615 22:33:51 -- accel/accel.sh@21 -- # val= 00:07:06.615 22:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.615 22:33:51 -- accel/accel.sh@21 -- # val= 00:07:06.615 22:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # 
IFS=: 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.615 22:33:51 -- accel/accel.sh@21 -- # val= 00:07:06.615 22:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.615 22:33:51 -- accel/accel.sh@21 -- # val= 00:07:06.615 22:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.615 22:33:51 -- accel/accel.sh@21 -- # val= 00:07:06.615 22:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.615 22:33:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.615 22:33:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.615 22:33:51 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:06.615 22:33:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.615 00:07:06.615 real 0m2.594s 00:07:06.615 user 0m2.388s 00:07:06.615 sys 0m0.213s 00:07:06.615 22:33:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.615 22:33:51 -- common/autotest_common.sh@10 -- # set +x 00:07:06.615 ************************************ 00:07:06.615 END TEST accel_comp 00:07:06.615 ************************************ 00:07:06.615 22:33:51 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.615 22:33:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:06.615 22:33:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.615 22:33:51 -- common/autotest_common.sh@10 -- # set +x 00:07:06.615 ************************************ 00:07:06.615 START TEST accel_decomp 00:07:06.615 ************************************ 00:07:06.615 22:33:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.615 22:33:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.615 22:33:51 -- accel/accel.sh@17 -- # local accel_module 00:07:06.615 22:33:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.615 22:33:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.615 22:33:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.615 22:33:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.615 22:33:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.615 22:33:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.615 22:33:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.615 22:33:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.615 22:33:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.877 22:33:51 -- accel/accel.sh@42 -- # jq -r . 00:07:06.877 [2024-04-15 22:33:51.445935] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:06.877 [2024-04-15 22:33:51.446013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913378 ] 00:07:06.877 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.877 [2024-04-15 22:33:51.514957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.877 [2024-04-15 22:33:51.583092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.263 22:33:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:08.263 00:07:08.263 SPDK Configuration: 00:07:08.263 Core mask: 0x1 00:07:08.263 00:07:08.263 Accel Perf Configuration: 00:07:08.263 Workload Type: decompress 00:07:08.263 Transfer size: 4096 bytes 00:07:08.263 Vector count 1 00:07:08.263 Module: software 00:07:08.263 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.263 Queue depth: 32 00:07:08.263 Allocate depth: 32 00:07:08.263 # threads/core: 1 00:07:08.263 Run time: 1 seconds 00:07:08.263 Verify: Yes 00:07:08.263 00:07:08.263 Running for 1 seconds... 00:07:08.263 00:07:08.263 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.263 ------------------------------------------------------------------------------------ 00:07:08.263 0,0 63200/s 116 MiB/s 0 0 00:07:08.263 ==================================================================================== 00:07:08.263 Total 63200/s 246 MiB/s 0 0' 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:08.263 22:33:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:08.263 22:33:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.263 22:33:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.263 22:33:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.263 22:33:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.263 22:33:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.263 22:33:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.263 22:33:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.263 22:33:52 -- accel/accel.sh@42 -- # jq -r . 00:07:08.263 [2024-04-15 22:33:52.740368] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
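A quick consistency check of the decompress result above, assuming the reported Total bandwidth is simply transfers per second times the 4096-byte transfer size shown in the Accel Perf Configuration:

$ echo "$(( 63200 * 4096 / 1024 / 1024 )) MiB/s"    # prints "246 MiB/s", matching the Total row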
00:07:08.263 [2024-04-15 22:33:52.740475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913698 ] 00:07:08.263 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.263 [2024-04-15 22:33:52.809281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.263 [2024-04-15 22:33:52.872298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val= 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val= 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val= 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val=0x1 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val= 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val= 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val=decompress 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val= 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val=software 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val=32 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 
-- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val=32 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val=1 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val=Yes 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val= 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:08.263 22:33:52 -- accel/accel.sh@21 -- # val= 00:07:08.263 22:33:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # IFS=: 00:07:08.263 22:33:52 -- accel/accel.sh@20 -- # read -r var val 00:07:09.206 22:33:53 -- accel/accel.sh@21 -- # val= 00:07:09.206 22:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.206 22:33:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.206 22:33:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.206 22:33:53 -- accel/accel.sh@21 -- # val= 00:07:09.206 22:33:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.206 22:33:53 -- accel/accel.sh@20 -- # IFS=: 00:07:09.206 22:33:53 -- accel/accel.sh@20 -- # read -r var val 00:07:09.206 22:33:53 -- accel/accel.sh@21 -- # val= 00:07:09.206 22:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.206 22:33:54 -- accel/accel.sh@20 -- # IFS=: 00:07:09.206 22:33:54 -- accel/accel.sh@20 -- # read -r var val 00:07:09.206 22:33:54 -- accel/accel.sh@21 -- # val= 00:07:09.206 22:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.206 22:33:54 -- accel/accel.sh@20 -- # IFS=: 00:07:09.206 22:33:54 -- accel/accel.sh@20 -- # read -r var val 00:07:09.206 22:33:54 -- accel/accel.sh@21 -- # val= 00:07:09.206 22:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.206 22:33:54 -- accel/accel.sh@20 -- # IFS=: 00:07:09.206 22:33:54 -- accel/accel.sh@20 -- # read -r var val 00:07:09.206 22:33:54 -- accel/accel.sh@21 -- # val= 00:07:09.206 22:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.206 22:33:54 -- accel/accel.sh@20 -- # IFS=: 00:07:09.206 22:33:54 -- accel/accel.sh@20 -- # read -r var val 00:07:09.206 22:33:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.206 22:33:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.206 22:33:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.206 00:07:09.206 real 0m2.588s 00:07:09.206 user 0m2.392s 00:07:09.206 sys 0m0.203s 00:07:09.206 22:33:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.206 22:33:54 -- common/autotest_common.sh@10 -- # set +x 00:07:09.206 ************************************ 00:07:09.206 END TEST accel_decomp 00:07:09.206 ************************************ 00:07:09.467 22:33:54 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:09.467 22:33:54 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:09.467 22:33:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.467 22:33:54 -- common/autotest_common.sh@10 -- # set +x 00:07:09.467 ************************************ 00:07:09.467 START TEST accel_decmop_full 00:07:09.467 ************************************ 00:07:09.467 22:33:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:09.467 22:33:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.467 22:33:54 -- accel/accel.sh@17 -- # local accel_module 00:07:09.467 22:33:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:09.467 22:33:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:09.467 22:33:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.467 22:33:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.467 22:33:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.467 22:33:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.467 22:33:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.467 22:33:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.467 22:33:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.467 22:33:54 -- accel/accel.sh@42 -- # jq -r . 00:07:09.467 [2024-04-15 22:33:54.075093] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:09.468 [2024-04-15 22:33:54.075166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914048 ] 00:07:09.468 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.468 [2024-04-15 22:33:54.155673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.468 [2024-04-15 22:33:54.221498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.855 22:33:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:10.855 00:07:10.855 SPDK Configuration: 00:07:10.855 Core mask: 0x1 00:07:10.855 00:07:10.855 Accel Perf Configuration: 00:07:10.855 Workload Type: decompress 00:07:10.855 Transfer size: 111250 bytes 00:07:10.855 Vector count 1 00:07:10.855 Module: software 00:07:10.855 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.855 Queue depth: 32 00:07:10.855 Allocate depth: 32 00:07:10.855 # threads/core: 1 00:07:10.855 Run time: 1 seconds 00:07:10.855 Verify: Yes 00:07:10.855 00:07:10.855 Running for 1 seconds... 
00:07:10.855 00:07:10.855 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.855 ------------------------------------------------------------------------------------ 00:07:10.855 0,0 4064/s 167 MiB/s 0 0 00:07:10.855 ==================================================================================== 00:07:10.855 Total 4064/s 431 MiB/s 0 0' 00:07:10.855 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.855 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.855 22:33:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.855 22:33:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.855 22:33:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.855 22:33:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.855 22:33:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.855 22:33:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.855 22:33:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.855 22:33:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.855 22:33:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.855 22:33:55 -- accel/accel.sh@42 -- # jq -r . 00:07:10.855 [2024-04-15 22:33:55.386747] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:10.855 [2024-04-15 22:33:55.386821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914329 ] 00:07:10.855 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.855 [2024-04-15 22:33:55.453142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.855 [2024-04-15 22:33:55.516214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.855 22:33:55 -- accel/accel.sh@21 -- # val= 00:07:10.855 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.855 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.855 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.855 22:33:55 -- accel/accel.sh@21 -- # val= 00:07:10.855 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.855 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val= 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val=0x1 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val= 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val= 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val=decompress 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 
00:07:10.856 22:33:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val= 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val=software 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val=32 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val=32 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val=1 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val=Yes 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val= 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.856 22:33:55 -- accel/accel.sh@21 -- # val= 00:07:10.856 22:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.856 22:33:55 -- accel/accel.sh@20 -- # read -r var val 00:07:12.241 22:33:56 -- accel/accel.sh@21 -- # val= 00:07:12.241 22:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.241 22:33:56 -- accel/accel.sh@21 -- # val= 00:07:12.241 22:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.241 22:33:56 -- accel/accel.sh@21 -- # val= 00:07:12.241 22:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.241 22:33:56 -- 
accel/accel.sh@20 -- # IFS=: 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.241 22:33:56 -- accel/accel.sh@21 -- # val= 00:07:12.241 22:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.241 22:33:56 -- accel/accel.sh@21 -- # val= 00:07:12.241 22:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.241 22:33:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.242 22:33:56 -- accel/accel.sh@21 -- # val= 00:07:12.242 22:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.242 22:33:56 -- accel/accel.sh@20 -- # IFS=: 00:07:12.242 22:33:56 -- accel/accel.sh@20 -- # read -r var val 00:07:12.242 22:33:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.242 22:33:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.242 22:33:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.242 00:07:12.242 real 0m2.609s 00:07:12.242 user 0m2.395s 00:07:12.242 sys 0m0.221s 00:07:12.242 22:33:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.242 22:33:56 -- common/autotest_common.sh@10 -- # set +x 00:07:12.242 ************************************ 00:07:12.242 END TEST accel_decmop_full 00:07:12.242 ************************************ 00:07:12.242 22:33:56 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.242 22:33:56 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:12.242 22:33:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.242 22:33:56 -- common/autotest_common.sh@10 -- # set +x 00:07:12.242 ************************************ 00:07:12.242 START TEST accel_decomp_mcore 00:07:12.242 ************************************ 00:07:12.242 22:33:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.242 22:33:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.242 22:33:56 -- accel/accel.sh@17 -- # local accel_module 00:07:12.242 22:33:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.242 22:33:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.242 22:33:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.242 22:33:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.242 22:33:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.242 22:33:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.242 22:33:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.242 22:33:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.242 22:33:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.242 22:33:56 -- accel/accel.sh@42 -- # jq -r . 00:07:12.242 [2024-04-15 22:33:56.728113] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:12.242 [2024-04-15 22:33:56.728186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914523 ] 00:07:12.242 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.242 [2024-04-15 22:33:56.796446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.242 [2024-04-15 22:33:56.862912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.242 [2024-04-15 22:33:56.863051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.242 [2024-04-15 22:33:56.863210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.242 [2024-04-15 22:33:56.863210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.625 22:33:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:13.625 00:07:13.625 SPDK Configuration: 00:07:13.625 Core mask: 0xf 00:07:13.625 00:07:13.625 Accel Perf Configuration: 00:07:13.625 Workload Type: decompress 00:07:13.625 Transfer size: 4096 bytes 00:07:13.625 Vector count 1 00:07:13.625 Module: software 00:07:13.625 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.625 Queue depth: 32 00:07:13.625 Allocate depth: 32 00:07:13.625 # threads/core: 1 00:07:13.625 Run time: 1 seconds 00:07:13.625 Verify: Yes 00:07:13.625 00:07:13.625 Running for 1 seconds... 00:07:13.625 00:07:13.625 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.625 ------------------------------------------------------------------------------------ 00:07:13.625 0,0 58496/s 107 MiB/s 0 0 00:07:13.625 3,0 58496/s 107 MiB/s 0 0 00:07:13.625 2,0 86560/s 159 MiB/s 0 0 00:07:13.625 1,0 58592/s 107 MiB/s 0 0 00:07:13.625 ==================================================================================== 00:07:13.626 Total 262144/s 1024 MiB/s 0 0' 00:07:13.626 22:33:57 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:57 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:13.626 22:33:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:13.626 22:33:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.626 22:33:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.626 22:33:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.626 22:33:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.626 22:33:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.626 22:33:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.626 22:33:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.626 22:33:58 -- accel/accel.sh@42 -- # jq -r . 00:07:13.626 [2024-04-15 22:33:58.024972] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
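The -m 0xf run above places one worker on each of cores 0-3 (hence the four "Reactor started on core N" notices), and its Total row is the sum of the per-core rows: 58496 + 58496 + 86560 + 58592 = 262144 transfers/s, which at 4096 bytes per transfer works out to exactly 1024 MiB/s.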
00:07:13.626 [2024-04-15 22:33:58.025073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914762 ] 00:07:13.626 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.626 [2024-04-15 22:33:58.093591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.626 [2024-04-15 22:33:58.159235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.626 [2024-04-15 22:33:58.159373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.626 [2024-04-15 22:33:58.159534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.626 [2024-04-15 22:33:58.159534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val= 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val= 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val= 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val=0xf 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val= 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val= 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val=decompress 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val= 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val=software 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val=32 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val=32 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val=1 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val=Yes 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val= 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.626 22:33:58 -- accel/accel.sh@21 -- # val= 00:07:13.626 22:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.626 22:33:58 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 
22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@21 -- # val= 00:07:14.567 22:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # IFS=: 00:07:14.567 22:33:59 -- accel/accel.sh@20 -- # read -r var val 00:07:14.567 22:33:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.567 22:33:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.567 22:33:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.567 00:07:14.567 real 0m2.600s 00:07:14.567 user 0m8.856s 00:07:14.567 sys 0m0.225s 00:07:14.567 22:33:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.567 22:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:14.567 ************************************ 00:07:14.567 END TEST accel_decomp_mcore 00:07:14.567 ************************************ 00:07:14.567 22:33:59 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.567 22:33:59 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:14.567 22:33:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.567 22:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:14.567 ************************************ 00:07:14.567 START TEST accel_decomp_full_mcore 00:07:14.567 ************************************ 00:07:14.567 22:33:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.567 22:33:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.567 22:33:59 -- accel/accel.sh@17 -- # local accel_module 00:07:14.567 22:33:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.567 22:33:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.567 22:33:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.567 22:33:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.567 22:33:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.567 22:33:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.567 22:33:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.567 22:33:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.567 22:33:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.567 22:33:59 -- accel/accel.sh@42 -- # jq -r . 00:07:14.567 [2024-04-15 22:33:59.364249] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:14.567 [2024-04-15 22:33:59.364371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915114 ] 00:07:14.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.828 [2024-04-15 22:33:59.440059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.828 [2024-04-15 22:33:59.505531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.828 [2024-04-15 22:33:59.505670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.828 [2024-04-15 22:33:59.505907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.828 [2024-04-15 22:33:59.505908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.210 22:34:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:16.210 00:07:16.210 SPDK Configuration: 00:07:16.210 Core mask: 0xf 00:07:16.210 00:07:16.210 Accel Perf Configuration: 00:07:16.210 Workload Type: decompress 00:07:16.210 Transfer size: 111250 bytes 00:07:16.210 Vector count 1 00:07:16.210 Module: software 00:07:16.210 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.210 Queue depth: 32 00:07:16.210 Allocate depth: 32 00:07:16.210 # threads/core: 1 00:07:16.210 Run time: 1 seconds 00:07:16.210 Verify: Yes 00:07:16.210 00:07:16.210 Running for 1 seconds... 00:07:16.210 00:07:16.210 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.210 ------------------------------------------------------------------------------------ 00:07:16.210 0,0 4064/s 167 MiB/s 0 0 00:07:16.210 3,0 4096/s 169 MiB/s 0 0 00:07:16.210 2,0 5920/s 244 MiB/s 0 0 00:07:16.210 1,0 4064/s 167 MiB/s 0 0 00:07:16.210 ==================================================================================== 00:07:16.210 Total 18144/s 1925 MiB/s 0 0' 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.210 22:34:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.210 22:34:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.210 22:34:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.210 22:34:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.210 22:34:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.210 22:34:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.210 22:34:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.210 22:34:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.210 22:34:00 -- accel/accel.sh@42 -- # jq -r . 00:07:16.210 [2024-04-15 22:34:00.678727] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:16.210 [2024-04-15 22:34:00.678817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915455 ] 00:07:16.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.210 [2024-04-15 22:34:00.747152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.210 [2024-04-15 22:34:00.812725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.210 [2024-04-15 22:34:00.812947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.210 [2024-04-15 22:34:00.813105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.210 [2024-04-15 22:34:00.813106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val= 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val= 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val= 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val=0xf 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val= 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val= 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val=decompress 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val= 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val=software 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val=32 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val=32 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val=1 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val=Yes 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val= 00:07:16.210 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.210 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:16.210 22:34:00 -- accel/accel.sh@21 -- # val= 00:07:16.211 22:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.211 22:34:00 -- accel/accel.sh@20 -- # IFS=: 00:07:16.211 22:34:00 -- accel/accel.sh@20 -- # read -r var val 00:07:17.149 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.149 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.149 22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.149 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.149 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.149 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.149 22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.149 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.149 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.149 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.149 22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.149 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.410 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.410 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.410 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.410 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.410 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.410 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.410 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.410 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.410 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.410 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.410 
22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.410 22:34:01 -- accel/accel.sh@21 -- # val= 00:07:17.410 22:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # IFS=: 00:07:17.410 22:34:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.410 22:34:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.410 22:34:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.410 22:34:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.410 00:07:17.410 real 0m2.628s 00:07:17.410 user 0m8.943s 00:07:17.410 sys 0m0.228s 00:07:17.410 22:34:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.410 22:34:01 -- common/autotest_common.sh@10 -- # set +x 00:07:17.410 ************************************ 00:07:17.410 END TEST accel_decomp_full_mcore 00:07:17.410 ************************************ 00:07:17.410 22:34:01 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.410 22:34:01 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:17.410 22:34:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.410 22:34:02 -- common/autotest_common.sh@10 -- # set +x 00:07:17.410 ************************************ 00:07:17.410 START TEST accel_decomp_mthread 00:07:17.410 ************************************ 00:07:17.410 22:34:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.410 22:34:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.410 22:34:02 -- accel/accel.sh@17 -- # local accel_module 00:07:17.410 22:34:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.410 22:34:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.410 22:34:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.410 22:34:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.410 22:34:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.410 22:34:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.410 22:34:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.410 22:34:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.410 22:34:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.410 22:34:02 -- accel/accel.sh@42 -- # jq -r . 00:07:17.410 [2024-04-15 22:34:02.034336] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:17.410 [2024-04-15 22:34:02.034408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915723 ] 00:07:17.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.410 [2024-04-15 22:34:02.101781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.410 [2024-04-15 22:34:02.170125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.792 22:34:03 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:18.792 00:07:18.792 SPDK Configuration: 00:07:18.792 Core mask: 0x1 00:07:18.792 00:07:18.792 Accel Perf Configuration: 00:07:18.792 Workload Type: decompress 00:07:18.792 Transfer size: 4096 bytes 00:07:18.792 Vector count 1 00:07:18.792 Module: software 00:07:18.792 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.792 Queue depth: 32 00:07:18.792 Allocate depth: 32 00:07:18.792 # threads/core: 2 00:07:18.792 Run time: 1 seconds 00:07:18.792 Verify: Yes 00:07:18.792 00:07:18.792 Running for 1 seconds... 00:07:18.792 00:07:18.792 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.792 ------------------------------------------------------------------------------------ 00:07:18.792 0,1 31936/s 58 MiB/s 0 0 00:07:18.792 0,0 31840/s 58 MiB/s 0 0 00:07:18.792 ==================================================================================== 00:07:18.792 Total 63776/s 249 MiB/s 0 0' 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.792 22:34:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:18.792 22:34:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:18.792 22:34:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.792 22:34:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.792 22:34:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.792 22:34:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.792 22:34:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.792 22:34:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.792 22:34:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.792 22:34:03 -- accel/accel.sh@42 -- # jq -r . 00:07:18.792 [2024-04-15 22:34:03.330089] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
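In the threaded run above, the -T 2 option is echoed back as "# threads/core: 2", so core 0 carries two worker threads that appear as separate rows (0,0 and 0,1) in the results table; their rates sum to the Total: 31936 + 31840 = 63776 transfers/s, roughly 249 MiB/s at 4 KiB per transfer.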
00:07:18.792 [2024-04-15 22:34:03.330187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915868 ] 00:07:18.792 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.792 [2024-04-15 22:34:03.398242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.792 [2024-04-15 22:34:03.461733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.792 22:34:03 -- accel/accel.sh@21 -- # val= 00:07:18.792 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.792 22:34:03 -- accel/accel.sh@21 -- # val= 00:07:18.792 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.792 22:34:03 -- accel/accel.sh@21 -- # val= 00:07:18.792 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.792 22:34:03 -- accel/accel.sh@21 -- # val=0x1 00:07:18.792 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.792 22:34:03 -- accel/accel.sh@21 -- # val= 00:07:18.792 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.792 22:34:03 -- accel/accel.sh@21 -- # val= 00:07:18.792 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.792 22:34:03 -- accel/accel.sh@21 -- # val=decompress 00:07:18.792 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.792 22:34:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.792 22:34:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.792 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.792 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val= 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val=software 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val=32 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 
-- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val=32 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val=2 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val=Yes 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val= 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:18.793 22:34:03 -- accel/accel.sh@21 -- # val= 00:07:18.793 22:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # IFS=: 00:07:18.793 22:34:03 -- accel/accel.sh@20 -- # read -r var val 00:07:20.176 22:34:04 -- accel/accel.sh@21 -- # val= 00:07:20.176 22:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # IFS=: 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.176 22:34:04 -- accel/accel.sh@21 -- # val= 00:07:20.176 22:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # IFS=: 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.176 22:34:04 -- accel/accel.sh@21 -- # val= 00:07:20.176 22:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # IFS=: 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.176 22:34:04 -- accel/accel.sh@21 -- # val= 00:07:20.176 22:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # IFS=: 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.176 22:34:04 -- accel/accel.sh@21 -- # val= 00:07:20.176 22:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # IFS=: 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.176 22:34:04 -- accel/accel.sh@21 -- # val= 00:07:20.176 22:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # IFS=: 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.176 22:34:04 -- accel/accel.sh@21 -- # val= 00:07:20.176 22:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # IFS=: 00:07:20.176 22:34:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.176 22:34:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.176 22:34:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.176 22:34:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.176 00:07:20.176 real 0m2.591s 00:07:20.176 user 0m2.401s 00:07:20.176 sys 0m0.199s 00:07:20.176 22:34:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.176 22:34:04 -- common/autotest_common.sh@10 -- # set +x 
00:07:20.176 ************************************ 00:07:20.176 END TEST accel_decomp_mthread 00:07:20.176 ************************************ 00:07:20.176 22:34:04 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.176 22:34:04 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:20.176 22:34:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.176 22:34:04 -- common/autotest_common.sh@10 -- # set +x 00:07:20.176 ************************************ 00:07:20.176 START TEST accel_deomp_full_mthread 00:07:20.176 ************************************ 00:07:20.176 22:34:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.176 22:34:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.177 22:34:04 -- accel/accel.sh@17 -- # local accel_module 00:07:20.177 22:34:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.177 22:34:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.177 22:34:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.177 22:34:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.177 22:34:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.177 22:34:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.177 22:34:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.177 22:34:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.177 22:34:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.177 22:34:04 -- accel/accel.sh@42 -- # jq -r . 00:07:20.177 [2024-04-15 22:34:04.670207] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:20.177 [2024-04-15 22:34:04.670313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916184 ] 00:07:20.177 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.177 [2024-04-15 22:34:04.739184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.177 [2024-04-15 22:34:04.801727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.560 22:34:05 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:21.560 00:07:21.560 SPDK Configuration: 00:07:21.560 Core mask: 0x1 00:07:21.560 00:07:21.560 Accel Perf Configuration: 00:07:21.560 Workload Type: decompress 00:07:21.560 Transfer size: 111250 bytes 00:07:21.560 Vector count 1 00:07:21.561 Module: software 00:07:21.561 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.561 Queue depth: 32 00:07:21.561 Allocate depth: 32 00:07:21.561 # threads/core: 2 00:07:21.561 Run time: 1 seconds 00:07:21.561 Verify: Yes 00:07:21.561 00:07:21.561 Running for 1 seconds... 
00:07:21.561 00:07:21.561 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.561 ------------------------------------------------------------------------------------ 00:07:21.561 0,1 2080/s 85 MiB/s 0 0 00:07:21.561 0,0 2048/s 84 MiB/s 0 0 00:07:21.561 ==================================================================================== 00:07:21.561 Total 4128/s 437 MiB/s 0 0' 00:07:21.561 22:34:05 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:05 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.561 22:34:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.561 22:34:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.561 22:34:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.561 22:34:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.561 22:34:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.561 22:34:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.561 22:34:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.561 22:34:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.561 22:34:05 -- accel/accel.sh@42 -- # jq -r . 00:07:21.561 [2024-04-15 22:34:05.984906] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:21.561 [2024-04-15 22:34:05.985005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916520 ] 00:07:21.561 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.561 [2024-04-15 22:34:06.051765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.561 [2024-04-15 22:34:06.113974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val= 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val= 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val= 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val=0x1 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val= 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val= 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val=decompress 00:07:21.561 
22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val= 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val=software 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val=32 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val=32 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val=2 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val=Yes 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val= 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:21.561 22:34:06 -- accel/accel.sh@21 -- # val= 00:07:21.561 22:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # IFS=: 00:07:21.561 22:34:06 -- accel/accel.sh@20 -- # read -r var val 00:07:22.501 22:34:07 -- accel/accel.sh@21 -- # val= 00:07:22.501 22:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.501 22:34:07 -- accel/accel.sh@21 -- # val= 00:07:22.501 22:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.501 22:34:07 -- accel/accel.sh@21 -- # val= 00:07:22.501 22:34:07 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.501 22:34:07 -- accel/accel.sh@21 -- # val= 00:07:22.501 22:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.501 22:34:07 -- accel/accel.sh@21 -- # val= 00:07:22.501 22:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.501 22:34:07 -- accel/accel.sh@21 -- # val= 00:07:22.501 22:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.501 22:34:07 -- accel/accel.sh@21 -- # val= 00:07:22.501 22:34:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.501 22:34:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.501 22:34:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.501 22:34:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:22.501 22:34:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.501 00:07:22.501 real 0m2.639s 00:07:22.501 user 0m2.441s 00:07:22.501 sys 0m0.205s 00:07:22.501 22:34:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.501 22:34:07 -- common/autotest_common.sh@10 -- # set +x 00:07:22.501 ************************************ 00:07:22.501 END TEST accel_deomp_full_mthread 00:07:22.501 ************************************ 00:07:22.775 22:34:07 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:22.775 22:34:07 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:22.775 22:34:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:22.775 22:34:07 -- accel/accel.sh@129 -- # build_accel_config 00:07:22.775 22:34:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.775 22:34:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.775 22:34:07 -- common/autotest_common.sh@10 -- # set +x 00:07:22.775 22:34:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.775 22:34:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.775 22:34:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.775 22:34:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.775 22:34:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.775 22:34:07 -- accel/accel.sh@42 -- # jq -r . 00:07:22.775 ************************************ 00:07:22.775 START TEST accel_dif_functional_tests 00:07:22.775 ************************************ 00:07:22.775 22:34:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:22.775 [2024-04-15 22:34:07.369725] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
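The same sanity check applies to the full-block multithread run above (111250-byte transfers, again assuming 1 MiB = 1048576 bytes):

  echo $(( 4128 * 111250 / 1048576 ))   # integer MiB/s; prints 437, matching the 437 MiB/s Total row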
00:07:22.775 [2024-04-15 22:34:07.369781] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916870 ] 00:07:22.775 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.775 [2024-04-15 22:34:07.435327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.775 [2024-04-15 22:34:07.500571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.775 [2024-04-15 22:34:07.500646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.775 [2024-04-15 22:34:07.500837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.776 00:07:22.776 00:07:22.776 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.776 http://cunit.sourceforge.net/ 00:07:22.776 00:07:22.776 00:07:22.776 Suite: accel_dif 00:07:22.776 Test: verify: DIF generated, GUARD check ...passed 00:07:22.776 Test: verify: DIF generated, APPTAG check ...passed 00:07:22.776 Test: verify: DIF generated, REFTAG check ...passed 00:07:22.776 Test: verify: DIF not generated, GUARD check ...[2024-04-15 22:34:07.556168] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:22.776 [2024-04-15 22:34:07.556206] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:22.776 passed 00:07:22.776 Test: verify: DIF not generated, APPTAG check ...[2024-04-15 22:34:07.556240] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:22.776 [2024-04-15 22:34:07.556255] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:22.776 passed 00:07:22.776 Test: verify: DIF not generated, REFTAG check ...[2024-04-15 22:34:07.556271] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:22.776 [2024-04-15 22:34:07.556285] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:22.776 passed 00:07:22.776 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:22.776 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-15 22:34:07.556328] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:22.776 passed 00:07:22.776 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:22.776 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:22.776 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:22.776 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-15 22:34:07.556441] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:22.776 passed 00:07:22.776 Test: generate copy: DIF generated, GUARD check ...passed 00:07:22.776 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:22.776 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:22.776 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:22.776 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:22.776 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:22.776 Test: generate copy: iovecs-len validate ...[2024-04-15 22:34:07.556642] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:22.776 passed 00:07:22.776 Test: generate copy: buffer alignment validate ...passed 00:07:22.776 00:07:22.776 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.776 suites 1 1 n/a 0 0 00:07:22.776 tests 20 20 20 0 0 00:07:22.776 asserts 204 204 204 0 n/a 00:07:22.776 00:07:22.776 Elapsed time = 0.002 seconds 00:07:23.064 00:07:23.064 real 0m0.348s 00:07:23.064 user 0m0.482s 00:07:23.064 sys 0m0.135s 00:07:23.064 22:34:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.064 22:34:07 -- common/autotest_common.sh@10 -- # set +x 00:07:23.064 ************************************ 00:07:23.064 END TEST accel_dif_functional_tests 00:07:23.064 ************************************ 00:07:23.064 00:07:23.064 real 0m55.048s 00:07:23.064 user 1m3.355s 00:07:23.064 sys 0m5.894s 00:07:23.064 22:34:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.064 22:34:07 -- common/autotest_common.sh@10 -- # set +x 00:07:23.064 ************************************ 00:07:23.064 END TEST accel 00:07:23.064 ************************************ 00:07:23.064 22:34:07 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:23.064 22:34:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:23.064 22:34:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.064 22:34:07 -- common/autotest_common.sh@10 -- # set +x 00:07:23.064 ************************************ 00:07:23.064 START TEST accel_rpc 00:07:23.064 ************************************ 00:07:23.064 22:34:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:23.064 * Looking for test storage... 00:07:23.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:23.064 22:34:07 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:23.064 22:34:07 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=916942 00:07:23.064 22:34:07 -- accel/accel_rpc.sh@15 -- # waitforlisten 916942 00:07:23.064 22:34:07 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:23.064 22:34:07 -- common/autotest_common.sh@819 -- # '[' -z 916942 ']' 00:07:23.064 22:34:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.064 22:34:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:23.064 22:34:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.064 22:34:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:23.064 22:34:07 -- common/autotest_common.sh@10 -- # set +x 00:07:23.326 [2024-04-15 22:34:07.900522] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
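The opcode-assignment flow exercised below can also be driven by hand against a spdk_tgt started with --wait-for-rpc; rpc_cmd in these tests is essentially a wrapper around scripts/rpc.py, so a rough equivalent (method names as used below, jq assumed available) is:

  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected output: software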
00:07:23.326 [2024-04-15 22:34:07.900602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916942 ] 00:07:23.326 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.326 [2024-04-15 22:34:07.972269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.326 [2024-04-15 22:34:08.043871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.326 [2024-04-15 22:34:08.044012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.896 22:34:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.896 22:34:08 -- common/autotest_common.sh@852 -- # return 0 00:07:23.896 22:34:08 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:23.896 22:34:08 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:23.896 22:34:08 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:23.896 22:34:08 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:23.896 22:34:08 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:23.896 22:34:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:23.896 22:34:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.896 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:07:23.896 ************************************ 00:07:23.896 START TEST accel_assign_opcode 00:07:23.896 ************************************ 00:07:23.896 22:34:08 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:23.896 22:34:08 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:23.896 22:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.896 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:07:23.896 [2024-04-15 22:34:08.673829] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:23.896 22:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.896 22:34:08 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:23.896 22:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.896 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:07:23.896 [2024-04-15 22:34:08.685855] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:23.896 22:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.896 22:34:08 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:23.896 22:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.896 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:07:24.157 22:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:24.157 22:34:08 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:24.157 22:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:24.157 22:34:08 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:24.157 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:07:24.157 22:34:08 -- accel/accel_rpc.sh@42 -- # grep software 00:07:24.157 22:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:24.157 software 00:07:24.157 00:07:24.157 real 0m0.213s 00:07:24.157 user 0m0.049s 00:07:24.157 sys 0m0.011s 00:07:24.157 22:34:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.157 22:34:08 -- common/autotest_common.sh@10 -- # set +x 
00:07:24.157 ************************************ 00:07:24.157 END TEST accel_assign_opcode 00:07:24.157 ************************************ 00:07:24.157 22:34:08 -- accel/accel_rpc.sh@55 -- # killprocess 916942 00:07:24.157 22:34:08 -- common/autotest_common.sh@926 -- # '[' -z 916942 ']' 00:07:24.157 22:34:08 -- common/autotest_common.sh@930 -- # kill -0 916942 00:07:24.157 22:34:08 -- common/autotest_common.sh@931 -- # uname 00:07:24.157 22:34:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:24.157 22:34:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 916942 00:07:24.416 22:34:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:24.416 22:34:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:24.416 22:34:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 916942' 00:07:24.416 killing process with pid 916942 00:07:24.416 22:34:08 -- common/autotest_common.sh@945 -- # kill 916942 00:07:24.416 22:34:08 -- common/autotest_common.sh@950 -- # wait 916942 00:07:24.416 00:07:24.416 real 0m1.431s 00:07:24.416 user 0m1.485s 00:07:24.416 sys 0m0.405s 00:07:24.416 22:34:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.416 22:34:09 -- common/autotest_common.sh@10 -- # set +x 00:07:24.416 ************************************ 00:07:24.416 END TEST accel_rpc 00:07:24.416 ************************************ 00:07:24.416 22:34:09 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:24.416 22:34:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:24.416 22:34:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.416 22:34:09 -- common/autotest_common.sh@10 -- # set +x 00:07:24.676 ************************************ 00:07:24.676 START TEST app_cmdline 00:07:24.676 ************************************ 00:07:24.676 22:34:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:24.676 * Looking for test storage... 00:07:24.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:24.676 22:34:09 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:24.676 22:34:09 -- app/cmdline.sh@17 -- # spdk_tgt_pid=917346 00:07:24.676 22:34:09 -- app/cmdline.sh@18 -- # waitforlisten 917346 00:07:24.676 22:34:09 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:24.676 22:34:09 -- common/autotest_common.sh@819 -- # '[' -z 917346 ']' 00:07:24.676 22:34:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.676 22:34:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:24.677 22:34:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.677 22:34:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:24.677 22:34:09 -- common/autotest_common.sh@10 -- # set +x 00:07:24.677 [2024-04-15 22:34:09.375000] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
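Because the target above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, only those two methods are served; any other method is rejected at the RPC layer, which is exactly what the cmdline test below exercises. Roughly:

  scripts/rpc.py spdk_get_version          # allowed; returns the version JSON shown below
  scripts/rpc.py rpc_get_methods           # allowed
  scripts/rpc.py env_dpdk_get_mem_stats    # rejected with JSON-RPC error -32601 'Method not found'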
00:07:24.677 [2024-04-15 22:34:09.375073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917346 ] 00:07:24.677 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.677 [2024-04-15 22:34:09.445874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.936 [2024-04-15 22:34:09.518073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:24.936 [2024-04-15 22:34:09.518202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.507 22:34:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:25.507 22:34:10 -- common/autotest_common.sh@852 -- # return 0 00:07:25.507 22:34:10 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:25.507 { 00:07:25.507 "version": "SPDK v24.01.1-pre git sha1 3b33f4333", 00:07:25.507 "fields": { 00:07:25.507 "major": 24, 00:07:25.507 "minor": 1, 00:07:25.507 "patch": 1, 00:07:25.507 "suffix": "-pre", 00:07:25.507 "commit": "3b33f4333" 00:07:25.507 } 00:07:25.507 } 00:07:25.507 22:34:10 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:25.507 22:34:10 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:25.507 22:34:10 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:25.507 22:34:10 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:25.507 22:34:10 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:25.507 22:34:10 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:25.507 22:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.507 22:34:10 -- app/cmdline.sh@26 -- # sort 00:07:25.507 22:34:10 -- common/autotest_common.sh@10 -- # set +x 00:07:25.507 22:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.767 22:34:10 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:25.767 22:34:10 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:25.767 22:34:10 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.767 22:34:10 -- common/autotest_common.sh@640 -- # local es=0 00:07:25.767 22:34:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.767 22:34:10 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.767 22:34:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:25.768 22:34:10 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.768 22:34:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:25.768 22:34:10 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.768 22:34:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:25.768 22:34:10 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.768 22:34:10 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:25.768 22:34:10 -- common/autotest_common.sh@643 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.768 request: 00:07:25.768 { 00:07:25.768 "method": "env_dpdk_get_mem_stats", 00:07:25.768 "req_id": 1 00:07:25.768 } 00:07:25.768 Got JSON-RPC error response 00:07:25.768 response: 00:07:25.768 { 00:07:25.768 "code": -32601, 00:07:25.768 "message": "Method not found" 00:07:25.768 } 00:07:25.768 22:34:10 -- common/autotest_common.sh@643 -- # es=1 00:07:25.768 22:34:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:25.768 22:34:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:25.768 22:34:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:25.768 22:34:10 -- app/cmdline.sh@1 -- # killprocess 917346 00:07:25.768 22:34:10 -- common/autotest_common.sh@926 -- # '[' -z 917346 ']' 00:07:25.768 22:34:10 -- common/autotest_common.sh@930 -- # kill -0 917346 00:07:25.768 22:34:10 -- common/autotest_common.sh@931 -- # uname 00:07:25.768 22:34:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:25.768 22:34:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 917346 00:07:25.768 22:34:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:25.768 22:34:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:25.768 22:34:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 917346' 00:07:25.768 killing process with pid 917346 00:07:25.768 22:34:10 -- common/autotest_common.sh@945 -- # kill 917346 00:07:25.768 22:34:10 -- common/autotest_common.sh@950 -- # wait 917346 00:07:26.027 00:07:26.027 real 0m1.527s 00:07:26.027 user 0m1.807s 00:07:26.027 sys 0m0.418s 00:07:26.027 22:34:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.027 22:34:10 -- common/autotest_common.sh@10 -- # set +x 00:07:26.027 ************************************ 00:07:26.027 END TEST app_cmdline 00:07:26.027 ************************************ 00:07:26.027 22:34:10 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:26.027 22:34:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:26.027 22:34:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.027 22:34:10 -- common/autotest_common.sh@10 -- # set +x 00:07:26.027 ************************************ 00:07:26.027 START TEST version 00:07:26.027 ************************************ 00:07:26.027 22:34:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:26.289 * Looking for test storage... 
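The version string assembled below is read out of include/spdk/version.h with grep/cut/tr and then normalized for comparison against the Python module; with the values extracted in this run it reduces to roughly:

  major=24; minor=1; patch=1; suffix=-pre
  version="$major.$minor"
  [ "$patch" -ne 0 ] && version="$major.$minor.$patch"
  [ "$suffix" = "-pre" ] && version="${version}rc0"   # -pre is treated as rc0 for the Python-style comparison
  echo "$version"   # 24.1.1rc0, matching python3 -c 'import spdk; print(spdk.__version__)'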
00:07:26.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:26.289 22:34:10 -- app/version.sh@17 -- # get_header_version major 00:07:26.289 22:34:10 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:26.289 22:34:10 -- app/version.sh@14 -- # cut -f2 00:07:26.289 22:34:10 -- app/version.sh@14 -- # tr -d '"' 00:07:26.289 22:34:10 -- app/version.sh@17 -- # major=24 00:07:26.289 22:34:10 -- app/version.sh@18 -- # get_header_version minor 00:07:26.289 22:34:10 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:26.289 22:34:10 -- app/version.sh@14 -- # tr -d '"' 00:07:26.289 22:34:10 -- app/version.sh@14 -- # cut -f2 00:07:26.289 22:34:10 -- app/version.sh@18 -- # minor=1 00:07:26.289 22:34:10 -- app/version.sh@19 -- # get_header_version patch 00:07:26.289 22:34:10 -- app/version.sh@14 -- # tr -d '"' 00:07:26.289 22:34:10 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:26.289 22:34:10 -- app/version.sh@14 -- # cut -f2 00:07:26.289 22:34:10 -- app/version.sh@19 -- # patch=1 00:07:26.289 22:34:10 -- app/version.sh@20 -- # get_header_version suffix 00:07:26.289 22:34:10 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:26.289 22:34:10 -- app/version.sh@14 -- # cut -f2 00:07:26.289 22:34:10 -- app/version.sh@14 -- # tr -d '"' 00:07:26.289 22:34:10 -- app/version.sh@20 -- # suffix=-pre 00:07:26.289 22:34:10 -- app/version.sh@22 -- # version=24.1 00:07:26.289 22:34:10 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:26.289 22:34:10 -- app/version.sh@25 -- # version=24.1.1 00:07:26.289 22:34:10 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:26.289 22:34:10 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:26.289 22:34:10 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:26.289 22:34:10 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:26.289 22:34:10 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:26.289 00:07:26.289 real 0m0.170s 00:07:26.289 user 0m0.090s 00:07:26.289 sys 0m0.111s 00:07:26.289 22:34:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.289 22:34:10 -- common/autotest_common.sh@10 -- # set +x 00:07:26.289 ************************************ 00:07:26.289 END TEST version 00:07:26.289 ************************************ 00:07:26.289 22:34:11 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:26.289 22:34:11 -- spdk/autotest.sh@204 -- # uname -s 00:07:26.289 22:34:11 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:26.289 22:34:11 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:26.289 22:34:11 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:26.289 22:34:11 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:26.289 22:34:11 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:26.289 22:34:11 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:26.289 22:34:11 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:26.289 22:34:11 -- common/autotest_common.sh@10 -- # set +x 00:07:26.289 22:34:11 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:26.289 22:34:11 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:26.289 22:34:11 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:26.289 22:34:11 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:26.289 22:34:11 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:26.289 22:34:11 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:26.289 22:34:11 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:26.289 22:34:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:26.289 22:34:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.289 22:34:11 -- common/autotest_common.sh@10 -- # set +x 00:07:26.289 ************************************ 00:07:26.289 START TEST nvmf_tcp 00:07:26.289 ************************************ 00:07:26.289 22:34:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:26.550 * Looking for test storage... 00:07:26.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:26.550 22:34:11 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:26.550 22:34:11 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:26.550 22:34:11 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.550 22:34:11 -- nvmf/common.sh@7 -- # uname -s 00:07:26.550 22:34:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.550 22:34:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.550 22:34:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.550 22:34:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.550 22:34:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.550 22:34:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.550 22:34:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.550 22:34:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.550 22:34:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.550 22:34:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.550 22:34:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:26.550 22:34:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:26.550 22:34:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.550 22:34:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.550 22:34:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.550 22:34:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.550 22:34:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.550 22:34:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.550 22:34:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.551 22:34:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.551 22:34:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.551 22:34:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.551 22:34:11 -- paths/export.sh@5 -- # export PATH 00:07:26.551 22:34:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.551 22:34:11 -- nvmf/common.sh@46 -- # : 0 00:07:26.551 22:34:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.551 22:34:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.551 22:34:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.551 22:34:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.551 22:34:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.551 22:34:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:26.551 22:34:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.551 22:34:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.551 22:34:11 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:26.551 22:34:11 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:26.551 22:34:11 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:26.551 22:34:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:26.551 22:34:11 -- common/autotest_common.sh@10 -- # set +x 00:07:26.551 22:34:11 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:26.551 22:34:11 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:26.551 22:34:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:26.551 22:34:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.551 22:34:11 -- common/autotest_common.sh@10 -- # set +x 00:07:26.551 ************************************ 00:07:26.551 START TEST nvmf_example 00:07:26.551 ************************************ 00:07:26.551 22:34:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:26.551 * Looking for test storage... 
00:07:26.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.551 22:34:11 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.551 22:34:11 -- nvmf/common.sh@7 -- # uname -s 00:07:26.551 22:34:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.551 22:34:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.551 22:34:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.551 22:34:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.551 22:34:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.551 22:34:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.551 22:34:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.551 22:34:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.551 22:34:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.551 22:34:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.551 22:34:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:26.551 22:34:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:26.551 22:34:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.551 22:34:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.551 22:34:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.551 22:34:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.551 22:34:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.551 22:34:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.551 22:34:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.551 22:34:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.551 22:34:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.551 22:34:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.551 22:34:11 -- paths/export.sh@5 -- # export PATH 00:07:26.551 22:34:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.551 22:34:11 -- nvmf/common.sh@46 -- # : 0 00:07:26.551 22:34:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.551 22:34:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.551 22:34:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.551 22:34:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.551 22:34:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.551 22:34:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:26.551 22:34:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.551 22:34:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.551 22:34:11 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:26.551 22:34:11 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:26.551 22:34:11 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:26.551 22:34:11 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:26.551 22:34:11 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:26.551 22:34:11 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:26.551 22:34:11 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:26.551 22:34:11 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:26.551 22:34:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:26.551 22:34:11 -- common/autotest_common.sh@10 -- # set +x 00:07:26.551 22:34:11 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:26.551 22:34:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:26.551 22:34:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.551 22:34:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:26.551 22:34:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:26.551 22:34:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:26.551 22:34:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.551 22:34:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.551 22:34:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.551 22:34:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:26.551 22:34:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:26.551 22:34:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:26.551 22:34:11 -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.714 22:34:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:34.714 22:34:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:34.714 22:34:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:34.714 22:34:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:34.714 22:34:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:34.714 22:34:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:34.714 22:34:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:34.714 22:34:18 -- nvmf/common.sh@294 -- # net_devs=() 00:07:34.714 22:34:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:34.714 22:34:18 -- nvmf/common.sh@295 -- # e810=() 00:07:34.714 22:34:18 -- nvmf/common.sh@295 -- # local -ga e810 00:07:34.714 22:34:18 -- nvmf/common.sh@296 -- # x722=() 00:07:34.714 22:34:18 -- nvmf/common.sh@296 -- # local -ga x722 00:07:34.714 22:34:18 -- nvmf/common.sh@297 -- # mlx=() 00:07:34.714 22:34:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:34.714 22:34:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.714 22:34:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:34.714 22:34:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:34.714 22:34:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:34.714 22:34:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:34.714 22:34:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:34.714 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:34.714 22:34:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:34.714 22:34:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:34.714 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:34.714 22:34:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:34.714 22:34:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:34.714 22:34:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:34.714 22:34:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:34.714 22:34:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.714 22:34:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:34.715 22:34:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.715 22:34:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:34.715 Found net devices under 0000:31:00.0: cvl_0_0 00:07:34.715 22:34:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.715 22:34:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:34.715 22:34:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.715 22:34:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:34.715 22:34:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.715 22:34:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:34.715 Found net devices under 0000:31:00.1: cvl_0_1 00:07:34.715 22:34:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.715 22:34:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:34.715 22:34:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:34.715 22:34:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:34.715 22:34:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:34.715 22:34:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:34.715 22:34:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.715 22:34:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.715 22:34:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.715 22:34:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:34.715 22:34:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.715 22:34:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.715 22:34:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:34.715 22:34:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.715 22:34:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.715 22:34:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:34.715 22:34:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:34.715 22:34:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.715 22:34:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.715 22:34:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.715 22:34:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.715 22:34:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:34.715 22:34:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.715 22:34:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.715 22:34:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.715 22:34:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:34.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:07:34.715 00:07:34.715 --- 10.0.0.2 ping statistics --- 00:07:34.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.715 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:07:34.715 22:34:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:07:34.715 00:07:34.715 --- 10.0.0.1 ping statistics --- 00:07:34.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.715 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:34.715 22:34:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.715 22:34:19 -- nvmf/common.sh@410 -- # return 0 00:07:34.715 22:34:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:34.715 22:34:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.715 22:34:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:34.715 22:34:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:34.715 22:34:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.715 22:34:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:34.715 22:34:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:34.715 22:34:19 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:34.715 22:34:19 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:34.715 22:34:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:34.715 22:34:19 -- common/autotest_common.sh@10 -- # set +x 00:07:34.715 22:34:19 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:34.715 22:34:19 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:34.715 22:34:19 -- target/nvmf_example.sh@34 -- # nvmfpid=922134 00:07:34.715 22:34:19 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.715 22:34:19 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:34.715 22:34:19 -- target/nvmf_example.sh@36 -- # waitforlisten 922134 00:07:34.715 22:34:19 -- common/autotest_common.sh@819 -- # '[' -z 922134 ']' 00:07:34.715 22:34:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.715 22:34:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:34.715 22:34:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
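The nvmf_tcp_init sequence traced above turns the two E810 ports into a self-contained NVMe/TCP test path: cvl_0_0 is moved into a new network namespace (cvl_0_0_ns_spdk) and addressed as the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is ping-verified in both directions before the example target is launched inside the namespace. A minimal sketch of those steps, assuming the same interface names the log reports and a root shell, is:

# recreate the namespace-based NVMe/TCP loopback used by the test
ip netns add cvl_0_0_ns_spdk                                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # allow inbound TCP/4420 on cvl_0_1
ping -c 1 10.0.0.2                                                # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target namespace -> initiator

Once both pings succeed, the example target (build/examples/nvmf -i 0 -g 10000 -m 0xF) is started under ip netns exec cvl_0_0_ns_spdk so its listener lives on the namespaced interface, and the harness waits for its RPC socket to appear, as the next lines show.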
00:07:34.715 22:34:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:34.715 22:34:19 -- common/autotest_common.sh@10 -- # set +x 00:07:34.715 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.662 22:34:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.662 22:34:20 -- common/autotest_common.sh@852 -- # return 0 00:07:35.662 22:34:20 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:35.662 22:34:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:35.662 22:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:35.662 22:34:20 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.662 22:34:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.662 22:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:35.662 22:34:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.662 22:34:20 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:35.662 22:34:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.662 22:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:35.662 22:34:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.662 22:34:20 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:35.662 22:34:20 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:35.662 22:34:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.662 22:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:35.662 22:34:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.662 22:34:20 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:35.662 22:34:20 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.662 22:34:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.662 22:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:35.662 22:34:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.662 22:34:20 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.662 22:34:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.662 22:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:35.662 22:34:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.663 22:34:20 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:35.663 22:34:20 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:35.663 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.899 Initializing NVMe Controllers 00:07:47.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:47.899 Initialization complete. Launching workers. 
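With the target listening on /var/tmp/spdk.sock, the test provisions it over JSON-RPC and then drives I/O at it: a TCP transport is created with the flags recorded above (-o -u 8192), a 64 MiB malloc bdev with 512-byte blocks becomes namespace 1 of subsystem nqn.2016-06.io.spdk:cnode1, a TCP listener is added on 10.0.0.2:4420, and spdk_nvme_perf runs a 10-second, queue-depth-64, 4 KiB mixed random read/write workload against it. Expressed with SPDK's standalone scripts/rpc.py instead of the harness's rpc_cmd wrapper (both forward to the same default /var/tmp/spdk.sock socket), the sequence is roughly:

# provisioning calls equivalent to the rpc_cmd trace above (run from an SPDK checkout)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512                 # creates Malloc0: 64 MiB, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# workload, run from the root namespace over cvl_0_1
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf output that follows reports the aggregate IOPS, throughput, and latency for the single attached namespace.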
00:07:47.899 ======================================================== 00:07:47.899 Latency(us) 00:07:47.899 Device Information : IOPS MiB/s Average min max 00:07:47.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17641.96 68.91 3628.48 817.05 18087.04 00:07:47.899 ======================================================== 00:07:47.899 Total : 17641.96 68.91 3628.48 817.05 18087.04 00:07:47.899 00:07:47.899 22:34:30 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:47.899 22:34:30 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:47.899 22:34:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:47.899 22:34:30 -- nvmf/common.sh@116 -- # sync 00:07:47.899 22:34:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:47.899 22:34:30 -- nvmf/common.sh@119 -- # set +e 00:07:47.899 22:34:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:47.899 22:34:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:47.899 rmmod nvme_tcp 00:07:47.899 rmmod nvme_fabrics 00:07:47.899 rmmod nvme_keyring 00:07:47.899 22:34:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:47.899 22:34:30 -- nvmf/common.sh@123 -- # set -e 00:07:47.900 22:34:30 -- nvmf/common.sh@124 -- # return 0 00:07:47.900 22:34:30 -- nvmf/common.sh@477 -- # '[' -n 922134 ']' 00:07:47.900 22:34:30 -- nvmf/common.sh@478 -- # killprocess 922134 00:07:47.900 22:34:30 -- common/autotest_common.sh@926 -- # '[' -z 922134 ']' 00:07:47.900 22:34:30 -- common/autotest_common.sh@930 -- # kill -0 922134 00:07:47.900 22:34:30 -- common/autotest_common.sh@931 -- # uname 00:07:47.900 22:34:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:47.900 22:34:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 922134 00:07:47.900 22:34:30 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:47.900 22:34:30 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:47.900 22:34:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 922134' 00:07:47.900 killing process with pid 922134 00:07:47.900 22:34:30 -- common/autotest_common.sh@945 -- # kill 922134 00:07:47.900 22:34:30 -- common/autotest_common.sh@950 -- # wait 922134 00:07:47.900 nvmf threads initialize successfully 00:07:47.900 bdev subsystem init successfully 00:07:47.900 created a nvmf target service 00:07:47.900 create targets's poll groups done 00:07:47.900 all subsystems of target started 00:07:47.900 nvmf target is running 00:07:47.900 all subsystems of target stopped 00:07:47.900 destroy targets's poll groups done 00:07:47.900 destroyed the nvmf target service 00:07:47.900 bdev subsystem finish successfully 00:07:47.900 nvmf threads destroy successfully 00:07:47.900 22:34:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:47.900 22:34:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:47.900 22:34:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:47.900 22:34:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.900 22:34:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:47.900 22:34:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.900 22:34:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.900 22:34:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.160 22:34:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:48.160 22:34:32 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:48.160 22:34:32 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:07:48.160 22:34:32 -- common/autotest_common.sh@10 -- # set +x 00:07:48.160 00:07:48.160 real 0m21.732s 00:07:48.160 user 0m46.882s 00:07:48.160 sys 0m6.960s 00:07:48.160 22:34:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.160 22:34:32 -- common/autotest_common.sh@10 -- # set +x 00:07:48.160 ************************************ 00:07:48.160 END TEST nvmf_example 00:07:48.160 ************************************ 00:07:48.423 22:34:32 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:48.423 22:34:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:48.423 22:34:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.423 22:34:32 -- common/autotest_common.sh@10 -- # set +x 00:07:48.423 ************************************ 00:07:48.423 START TEST nvmf_filesystem 00:07:48.423 ************************************ 00:07:48.423 22:34:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:48.423 * Looking for test storage... 00:07:48.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.423 22:34:33 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:48.423 22:34:33 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:48.423 22:34:33 -- common/autotest_common.sh@34 -- # set -e 00:07:48.423 22:34:33 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:48.423 22:34:33 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:48.423 22:34:33 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:48.423 22:34:33 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:48.423 22:34:33 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:48.423 22:34:33 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:48.423 22:34:33 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:48.423 22:34:33 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:48.423 22:34:33 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:48.423 22:34:33 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:48.423 22:34:33 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:48.423 22:34:33 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:48.423 22:34:33 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:48.423 22:34:33 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:48.423 22:34:33 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:48.423 22:34:33 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:48.423 22:34:33 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:48.423 22:34:33 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:48.423 22:34:33 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:48.423 22:34:33 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:48.423 22:34:33 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:48.423 22:34:33 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:48.423 22:34:33 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:48.423 22:34:33 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:07:48.423 22:34:33 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:48.423 22:34:33 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:48.423 22:34:33 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:48.423 22:34:33 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:48.423 22:34:33 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:48.423 22:34:33 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:48.423 22:34:33 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:48.423 22:34:33 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:48.423 22:34:33 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:48.423 22:34:33 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:48.423 22:34:33 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:48.423 22:34:33 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:48.423 22:34:33 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:48.423 22:34:33 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:48.423 22:34:33 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:48.423 22:34:33 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:48.423 22:34:33 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:48.423 22:34:33 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:48.423 22:34:33 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:48.423 22:34:33 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:48.423 22:34:33 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:48.423 22:34:33 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:48.423 22:34:33 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:48.423 22:34:33 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:48.423 22:34:33 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:48.423 22:34:33 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:48.423 22:34:33 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:48.423 22:34:33 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:48.423 22:34:33 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:48.423 22:34:33 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:48.423 22:34:33 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:48.423 22:34:33 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:48.423 22:34:33 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:48.423 22:34:33 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:48.423 22:34:33 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:48.423 22:34:33 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:48.423 22:34:33 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:48.423 22:34:33 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:48.423 22:34:33 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:48.423 22:34:33 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:07:48.423 22:34:33 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:48.423 22:34:33 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:48.423 22:34:33 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:48.423 22:34:33 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:48.423 22:34:33 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:48.423 22:34:33 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:48.423 22:34:33 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:07:48.423 22:34:33 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:48.423 22:34:33 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:48.423 22:34:33 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:48.424 22:34:33 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:48.424 22:34:33 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:48.424 22:34:33 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:48.424 22:34:33 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:48.424 22:34:33 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:48.424 22:34:33 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:48.424 22:34:33 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:48.424 22:34:33 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:48.424 22:34:33 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:48.424 22:34:33 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:48.424 22:34:33 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:48.424 22:34:33 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:48.424 22:34:33 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:48.424 22:34:33 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:48.424 22:34:33 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.424 22:34:33 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:48.424 22:34:33 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.424 22:34:33 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:48.424 22:34:33 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:48.424 22:34:33 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:48.424 22:34:33 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:48.424 22:34:33 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:48.424 22:34:33 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:48.424 22:34:33 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:48.424 22:34:33 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:48.424 #define SPDK_CONFIG_H 00:07:48.424 #define SPDK_CONFIG_APPS 1 00:07:48.424 #define SPDK_CONFIG_ARCH native 00:07:48.424 #undef SPDK_CONFIG_ASAN 00:07:48.424 #undef SPDK_CONFIG_AVAHI 00:07:48.424 #undef SPDK_CONFIG_CET 00:07:48.424 #define SPDK_CONFIG_COVERAGE 1 00:07:48.424 #define SPDK_CONFIG_CROSS_PREFIX 00:07:48.424 #undef SPDK_CONFIG_CRYPTO 00:07:48.424 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:48.424 #undef SPDK_CONFIG_CUSTOMOCF 00:07:48.424 #undef SPDK_CONFIG_DAOS 00:07:48.424 #define SPDK_CONFIG_DAOS_DIR 00:07:48.424 #define SPDK_CONFIG_DEBUG 1 00:07:48.424 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:48.424 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:48.424 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:48.424 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:07:48.424 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:48.424 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:48.424 #define SPDK_CONFIG_EXAMPLES 1 00:07:48.424 #undef SPDK_CONFIG_FC 00:07:48.424 #define SPDK_CONFIG_FC_PATH 00:07:48.424 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:48.424 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:48.424 #undef SPDK_CONFIG_FUSE 00:07:48.424 #undef SPDK_CONFIG_FUZZER 00:07:48.424 #define SPDK_CONFIG_FUZZER_LIB 00:07:48.424 #undef SPDK_CONFIG_GOLANG 00:07:48.424 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:48.424 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:48.424 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:48.424 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:48.424 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:48.424 #define SPDK_CONFIG_IDXD 1 00:07:48.424 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:48.424 #undef SPDK_CONFIG_IPSEC_MB 00:07:48.424 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:48.424 #define SPDK_CONFIG_ISAL 1 00:07:48.424 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:48.424 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:48.424 #define SPDK_CONFIG_LIBDIR 00:07:48.424 #undef SPDK_CONFIG_LTO 00:07:48.424 #define SPDK_CONFIG_MAX_LCORES 00:07:48.424 #define SPDK_CONFIG_NVME_CUSE 1 00:07:48.424 #undef SPDK_CONFIG_OCF 00:07:48.424 #define SPDK_CONFIG_OCF_PATH 00:07:48.424 #define SPDK_CONFIG_OPENSSL_PATH 00:07:48.424 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:48.424 #undef SPDK_CONFIG_PGO_USE 00:07:48.424 #define SPDK_CONFIG_PREFIX /usr/local 00:07:48.424 #undef SPDK_CONFIG_RAID5F 00:07:48.424 #undef SPDK_CONFIG_RBD 00:07:48.424 #define SPDK_CONFIG_RDMA 1 00:07:48.424 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:48.424 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:48.424 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:48.424 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:48.424 #define SPDK_CONFIG_SHARED 1 00:07:48.424 #undef SPDK_CONFIG_SMA 00:07:48.424 #define SPDK_CONFIG_TESTS 1 00:07:48.424 #undef SPDK_CONFIG_TSAN 00:07:48.424 #define SPDK_CONFIG_UBLK 1 00:07:48.424 #define SPDK_CONFIG_UBSAN 1 00:07:48.424 #undef SPDK_CONFIG_UNIT_TESTS 00:07:48.424 #undef SPDK_CONFIG_URING 00:07:48.424 #define SPDK_CONFIG_URING_PATH 00:07:48.424 #undef SPDK_CONFIG_URING_ZNS 00:07:48.424 #undef SPDK_CONFIG_USDT 00:07:48.424 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:48.424 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:48.424 #undef SPDK_CONFIG_VFIO_USER 00:07:48.424 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:48.424 #define SPDK_CONFIG_VHOST 1 00:07:48.424 #define SPDK_CONFIG_VIRTIO 1 00:07:48.424 #undef SPDK_CONFIG_VTUNE 00:07:48.424 #define SPDK_CONFIG_VTUNE_DIR 00:07:48.424 #define SPDK_CONFIG_WERROR 1 00:07:48.424 #define SPDK_CONFIG_WPDK_DIR 00:07:48.424 #undef SPDK_CONFIG_XNVME 00:07:48.424 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:48.424 22:34:33 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:48.424 22:34:33 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.424 22:34:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.424 22:34:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.424 22:34:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.424 22:34:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.424 22:34:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.424 22:34:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.424 22:34:33 -- paths/export.sh@5 -- # export PATH 00:07:48.424 22:34:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.424 22:34:33 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:48.424 22:34:33 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:48.424 22:34:33 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:48.424 22:34:33 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:48.424 22:34:33 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:48.424 22:34:33 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:48.424 22:34:33 -- pm/common@16 -- # TEST_TAG=N/A 00:07:48.424 22:34:33 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:48.424 22:34:33 -- common/autotest_common.sh@52 -- # : 1 00:07:48.424 22:34:33 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:48.424 22:34:33 -- common/autotest_common.sh@56 -- # : 0 00:07:48.424 22:34:33 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:48.424 22:34:33 -- 
common/autotest_common.sh@58 -- # : 0 00:07:48.424 22:34:33 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:48.424 22:34:33 -- common/autotest_common.sh@60 -- # : 1 00:07:48.424 22:34:33 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:48.424 22:34:33 -- common/autotest_common.sh@62 -- # : 0 00:07:48.424 22:34:33 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:48.424 22:34:33 -- common/autotest_common.sh@64 -- # : 00:07:48.424 22:34:33 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:48.424 22:34:33 -- common/autotest_common.sh@66 -- # : 0 00:07:48.424 22:34:33 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:48.424 22:34:33 -- common/autotest_common.sh@68 -- # : 0 00:07:48.424 22:34:33 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:48.424 22:34:33 -- common/autotest_common.sh@70 -- # : 0 00:07:48.424 22:34:33 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:48.425 22:34:33 -- common/autotest_common.sh@72 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:48.425 22:34:33 -- common/autotest_common.sh@74 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:48.425 22:34:33 -- common/autotest_common.sh@76 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:48.425 22:34:33 -- common/autotest_common.sh@78 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:48.425 22:34:33 -- common/autotest_common.sh@80 -- # : 1 00:07:48.425 22:34:33 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:48.425 22:34:33 -- common/autotest_common.sh@82 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:48.425 22:34:33 -- common/autotest_common.sh@84 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:48.425 22:34:33 -- common/autotest_common.sh@86 -- # : 1 00:07:48.425 22:34:33 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:48.425 22:34:33 -- common/autotest_common.sh@88 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:48.425 22:34:33 -- common/autotest_common.sh@90 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:48.425 22:34:33 -- common/autotest_common.sh@92 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:48.425 22:34:33 -- common/autotest_common.sh@94 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:48.425 22:34:33 -- common/autotest_common.sh@96 -- # : tcp 00:07:48.425 22:34:33 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:48.425 22:34:33 -- common/autotest_common.sh@98 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:48.425 22:34:33 -- common/autotest_common.sh@100 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:48.425 22:34:33 -- common/autotest_common.sh@102 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:48.425 22:34:33 -- common/autotest_common.sh@104 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:48.425 
22:34:33 -- common/autotest_common.sh@106 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:48.425 22:34:33 -- common/autotest_common.sh@108 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:48.425 22:34:33 -- common/autotest_common.sh@110 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:48.425 22:34:33 -- common/autotest_common.sh@112 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:48.425 22:34:33 -- common/autotest_common.sh@114 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:48.425 22:34:33 -- common/autotest_common.sh@116 -- # : 1 00:07:48.425 22:34:33 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:48.425 22:34:33 -- common/autotest_common.sh@118 -- # : 00:07:48.425 22:34:33 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:48.425 22:34:33 -- common/autotest_common.sh@120 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:48.425 22:34:33 -- common/autotest_common.sh@122 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:48.425 22:34:33 -- common/autotest_common.sh@124 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:48.425 22:34:33 -- common/autotest_common.sh@126 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:48.425 22:34:33 -- common/autotest_common.sh@128 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:48.425 22:34:33 -- common/autotest_common.sh@130 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:48.425 22:34:33 -- common/autotest_common.sh@132 -- # : 00:07:48.425 22:34:33 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:48.425 22:34:33 -- common/autotest_common.sh@134 -- # : true 00:07:48.425 22:34:33 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:48.425 22:34:33 -- common/autotest_common.sh@136 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:48.425 22:34:33 -- common/autotest_common.sh@138 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:48.425 22:34:33 -- common/autotest_common.sh@140 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:48.425 22:34:33 -- common/autotest_common.sh@142 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:48.425 22:34:33 -- common/autotest_common.sh@144 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:48.425 22:34:33 -- common/autotest_common.sh@146 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:48.425 22:34:33 -- common/autotest_common.sh@148 -- # : e810 00:07:48.425 22:34:33 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:48.425 22:34:33 -- common/autotest_common.sh@150 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:48.425 22:34:33 -- common/autotest_common.sh@152 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:07:48.425 22:34:33 -- common/autotest_common.sh@154 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:48.425 22:34:33 -- common/autotest_common.sh@156 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:48.425 22:34:33 -- common/autotest_common.sh@158 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:48.425 22:34:33 -- common/autotest_common.sh@160 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:48.425 22:34:33 -- common/autotest_common.sh@163 -- # : 00:07:48.425 22:34:33 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:48.425 22:34:33 -- common/autotest_common.sh@165 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:48.425 22:34:33 -- common/autotest_common.sh@167 -- # : 0 00:07:48.425 22:34:33 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:48.425 22:34:33 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:48.425 22:34:33 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:48.425 22:34:33 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:48.425 22:34:33 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:48.425 22:34:33 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.425 22:34:33 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.425 22:34:33 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.425 22:34:33 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.425 22:34:33 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:48.425 22:34:33 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:48.425 22:34:33 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:48.425 22:34:33 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:48.425 22:34:33 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:48.425 22:34:33 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:48.425 22:34:33 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:48.425 22:34:33 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:48.425 22:34:33 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:48.425 22:34:33 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:48.426 22:34:33 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:48.426 22:34:33 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:48.426 22:34:33 -- common/autotest_common.sh@196 -- # cat 00:07:48.426 22:34:33 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:48.426 22:34:33 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:48.426 22:34:33 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:48.426 22:34:33 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:48.426 22:34:33 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:48.426 22:34:33 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:48.426 22:34:33 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:48.426 22:34:33 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.426 22:34:33 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.426 22:34:33 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.426 22:34:33 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.426 22:34:33 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:48.426 22:34:33 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:48.426 22:34:33 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:48.426 22:34:33 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:48.426 22:34:33 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:48.426 22:34:33 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:48.426 22:34:33 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:48.426 22:34:33 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:48.426 22:34:33 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:48.426 22:34:33 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:48.426 22:34:33 -- common/autotest_common.sh@249 -- # valgrind= 00:07:48.426 22:34:33 -- common/autotest_common.sh@255 -- # uname -s 00:07:48.426 22:34:33 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:48.426 22:34:33 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:48.426 22:34:33 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:48.426 22:34:33 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:48.426 22:34:33 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:48.426 22:34:33 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:07:48.426 22:34:33 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:48.426 22:34:33 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:48.426 22:34:33 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:48.426 22:34:33 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:48.426 22:34:33 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:48.426 22:34:33 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:48.426 22:34:33 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:48.426 22:34:33 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:48.426 22:34:33 -- common/autotest_common.sh@309 -- # [[ -z 924966 ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@309 -- # 
kill -0 924966 00:07:48.426 22:34:33 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:48.426 22:34:33 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:48.426 22:34:33 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:48.426 22:34:33 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:48.426 22:34:33 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:48.426 22:34:33 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:48.426 22:34:33 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:48.426 22:34:33 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.59N4FF 00:07:48.426 22:34:33 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:48.426 22:34:33 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.59N4FF/tests/target /tmp/spdk.59N4FF 00:07:48.426 22:34:33 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:48.426 22:34:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:48.426 22:34:33 -- common/autotest_common.sh@318 -- # df -T 00:07:48.426 22:34:33 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:48.426 22:34:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:48.426 22:34:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=120016019456 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=134654541824 00:07:48.426 22:34:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=14638522368 00:07:48.426 22:34:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=67273752576 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67327270912 00:07:48.426 22:34:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:07:48.426 22:34:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=26920968192 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=26930909184 00:07:48.426 22:34:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=9940992 00:07:48.426 22:34:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=193536 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:07:48.426 22:34:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=310272 00:07:48.426 22:34:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=67326382080 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67327270912 00:07:48.426 22:34:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=888832 00:07:48.426 22:34:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=13465448448 00:07:48.426 22:34:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=13465452544 00:07:48.426 22:34:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:48.426 22:34:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:48.426 22:34:33 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:48.426 * Looking for test storage... 
00:07:48.426 22:34:33 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:48.426 22:34:33 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:48.426 22:34:33 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.426 22:34:33 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:48.426 22:34:33 -- common/autotest_common.sh@363 -- # mount=/ 00:07:48.426 22:34:33 -- common/autotest_common.sh@365 -- # target_space=120016019456 00:07:48.426 22:34:33 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:48.426 22:34:33 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:48.426 22:34:33 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:48.426 22:34:33 -- common/autotest_common.sh@372 -- # new_size=16853114880 00:07:48.426 22:34:33 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:48.427 22:34:33 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.427 22:34:33 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.427 22:34:33 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.427 22:34:33 -- common/autotest_common.sh@380 -- # return 0 00:07:48.427 22:34:33 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:48.427 22:34:33 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:48.427 22:34:33 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:48.427 22:34:33 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:48.427 22:34:33 -- common/autotest_common.sh@1672 -- # true 00:07:48.427 22:34:33 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:48.427 22:34:33 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:48.427 22:34:33 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:48.427 22:34:33 -- common/autotest_common.sh@27 -- # exec 00:07:48.427 22:34:33 -- common/autotest_common.sh@29 -- # exec 00:07:48.427 22:34:33 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:48.427 22:34:33 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:48.427 22:34:33 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:48.427 22:34:33 -- common/autotest_common.sh@18 -- # set -x 00:07:48.427 22:34:33 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.427 22:34:33 -- nvmf/common.sh@7 -- # uname -s 00:07:48.427 22:34:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.427 22:34:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.427 22:34:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.427 22:34:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.427 22:34:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.427 22:34:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.427 22:34:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.427 22:34:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.427 22:34:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.427 22:34:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.687 22:34:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:48.687 22:34:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:48.687 22:34:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.687 22:34:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.687 22:34:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.687 22:34:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.687 22:34:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.687 22:34:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.687 22:34:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.688 22:34:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.688 22:34:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.688 22:34:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.688 22:34:33 -- paths/export.sh@5 -- # export PATH 00:07:48.688 22:34:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.688 22:34:33 -- nvmf/common.sh@46 -- # : 0 00:07:48.688 22:34:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:48.688 22:34:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:48.688 22:34:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:48.688 22:34:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.688 22:34:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.688 22:34:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:48.688 22:34:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:48.688 22:34:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:48.688 22:34:33 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:48.688 22:34:33 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:48.688 22:34:33 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:48.688 22:34:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:48.688 22:34:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.688 22:34:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:48.688 22:34:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:48.688 22:34:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:48.688 22:34:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.688 22:34:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.688 22:34:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.688 22:34:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:48.688 22:34:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:48.688 22:34:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:48.688 22:34:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.829 22:34:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:56.829 22:34:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:56.829 22:34:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:56.829 22:34:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:56.829 22:34:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:56.829 22:34:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:56.829 22:34:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:56.829 22:34:40 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:56.829 22:34:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:56.829 22:34:40 -- nvmf/common.sh@295 -- # e810=() 00:07:56.829 22:34:40 -- nvmf/common.sh@295 -- # local -ga e810 00:07:56.829 22:34:40 -- nvmf/common.sh@296 -- # x722=() 00:07:56.829 22:34:40 -- nvmf/common.sh@296 -- # local -ga x722 00:07:56.829 22:34:40 -- nvmf/common.sh@297 -- # mlx=() 00:07:56.829 22:34:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:56.829 22:34:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.829 22:34:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:56.829 22:34:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:56.829 22:34:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:56.829 22:34:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:56.829 22:34:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:56.829 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:56.829 22:34:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:56.829 22:34:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:56.829 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:56.829 22:34:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:56.829 22:34:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:56.829 22:34:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.829 22:34:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:56.829 22:34:40 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.829 22:34:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:56.829 Found net devices under 0000:31:00.0: cvl_0_0 00:07:56.829 22:34:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.829 22:34:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:56.829 22:34:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.829 22:34:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:56.829 22:34:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.829 22:34:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:56.829 Found net devices under 0000:31:00.1: cvl_0_1 00:07:56.829 22:34:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.829 22:34:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:56.829 22:34:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:56.829 22:34:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:56.829 22:34:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:56.829 22:34:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.829 22:34:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.829 22:34:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.829 22:34:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:56.829 22:34:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.829 22:34:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.829 22:34:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:56.829 22:34:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.829 22:34:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.829 22:34:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:56.829 22:34:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:56.829 22:34:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.829 22:34:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.829 22:34:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.829 22:34:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.829 22:34:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:56.829 22:34:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.829 22:34:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.829 22:34:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.829 22:34:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:56.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:07:56.829 00:07:56.829 --- 10.0.0.2 ping statistics --- 00:07:56.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.829 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:07:56.829 22:34:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:07:56.829 00:07:56.829 --- 10.0.0.1 ping statistics --- 00:07:56.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.829 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:07:56.829 22:34:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.829 22:34:41 -- nvmf/common.sh@410 -- # return 0 00:07:56.829 22:34:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:56.829 22:34:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.829 22:34:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:56.829 22:34:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:56.829 22:34:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.829 22:34:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:56.829 22:34:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:56.829 22:34:41 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:56.829 22:34:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:56.829 22:34:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.829 22:34:41 -- common/autotest_common.sh@10 -- # set +x 00:07:56.829 ************************************ 00:07:56.829 START TEST nvmf_filesystem_no_in_capsule 00:07:56.829 ************************************ 00:07:56.829 22:34:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:56.829 22:34:41 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:56.829 22:34:41 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:56.829 22:34:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:56.829 22:34:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:56.829 22:34:41 -- common/autotest_common.sh@10 -- # set +x 00:07:56.829 22:34:41 -- nvmf/common.sh@469 -- # nvmfpid=929234 00:07:56.829 22:34:41 -- nvmf/common.sh@470 -- # waitforlisten 929234 00:07:56.829 22:34:41 -- common/autotest_common.sh@819 -- # '[' -z 929234 ']' 00:07:56.829 22:34:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.829 22:34:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.829 22:34:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.829 22:34:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.829 22:34:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.829 22:34:41 -- common/autotest_common.sh@10 -- # set +x 00:07:56.829 [2024-04-15 22:34:41.276686] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
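Before the target application comes up, the trace above splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-verified. A minimal sketch of that plumbing (nvmf_tcp_init in test/nvmf/common.sh performs these steps; interface, namespace, and address names are taken from the log):

  # target side gets its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps cvl_0_1 in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # accept NVMe/TCP traffic on the default port, as in the trace
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1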
00:07:56.829 [2024-04-15 22:34:41.276752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.829 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.829 [2024-04-15 22:34:41.354317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.829 [2024-04-15 22:34:41.419162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.829 [2024-04-15 22:34:41.419289] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.830 [2024-04-15 22:34:41.419298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.830 [2024-04-15 22:34:41.419305] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.830 [2024-04-15 22:34:41.419439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.830 [2024-04-15 22:34:41.419562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.830 [2024-04-15 22:34:41.419663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.830 [2024-04-15 22:34:41.419664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.400 22:34:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:57.400 22:34:42 -- common/autotest_common.sh@852 -- # return 0 00:07:57.400 22:34:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:57.400 22:34:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:57.400 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.400 22:34:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.400 22:34:42 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:57.400 22:34:42 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:57.400 22:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.400 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.400 [2024-04-15 22:34:42.098691] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.400 22:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.400 22:34:42 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:57.400 22:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.400 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.400 Malloc1 00:07:57.400 22:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.400 22:34:42 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:57.400 22:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.400 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.400 22:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.400 22:34:42 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:57.400 22:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.400 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.660 22:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.660 22:34:42 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:07:57.660 22:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.660 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.660 [2024-04-15 22:34:42.226858] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.660 22:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.660 22:34:42 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:57.660 22:34:42 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:57.660 22:34:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:57.660 22:34:42 -- common/autotest_common.sh@1359 -- # local bs 00:07:57.660 22:34:42 -- common/autotest_common.sh@1360 -- # local nb 00:07:57.660 22:34:42 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:57.660 22:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.660 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.660 22:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.660 22:34:42 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:57.660 { 00:07:57.660 "name": "Malloc1", 00:07:57.660 "aliases": [ 00:07:57.660 "34fe5f39-dc3c-43e9-8b8f-9b837cfdee23" 00:07:57.660 ], 00:07:57.660 "product_name": "Malloc disk", 00:07:57.660 "block_size": 512, 00:07:57.660 "num_blocks": 1048576, 00:07:57.660 "uuid": "34fe5f39-dc3c-43e9-8b8f-9b837cfdee23", 00:07:57.660 "assigned_rate_limits": { 00:07:57.660 "rw_ios_per_sec": 0, 00:07:57.660 "rw_mbytes_per_sec": 0, 00:07:57.660 "r_mbytes_per_sec": 0, 00:07:57.660 "w_mbytes_per_sec": 0 00:07:57.660 }, 00:07:57.660 "claimed": true, 00:07:57.660 "claim_type": "exclusive_write", 00:07:57.660 "zoned": false, 00:07:57.660 "supported_io_types": { 00:07:57.660 "read": true, 00:07:57.660 "write": true, 00:07:57.660 "unmap": true, 00:07:57.660 "write_zeroes": true, 00:07:57.660 "flush": true, 00:07:57.660 "reset": true, 00:07:57.660 "compare": false, 00:07:57.660 "compare_and_write": false, 00:07:57.660 "abort": true, 00:07:57.660 "nvme_admin": false, 00:07:57.660 "nvme_io": false 00:07:57.660 }, 00:07:57.660 "memory_domains": [ 00:07:57.660 { 00:07:57.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.660 "dma_device_type": 2 00:07:57.660 } 00:07:57.660 ], 00:07:57.660 "driver_specific": {} 00:07:57.660 } 00:07:57.660 ]' 00:07:57.660 22:34:42 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:57.660 22:34:42 -- common/autotest_common.sh@1362 -- # bs=512 00:07:57.660 22:34:42 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:57.660 22:34:42 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:57.660 22:34:42 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:57.660 22:34:42 -- common/autotest_common.sh@1367 -- # echo 512 00:07:57.660 22:34:42 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:57.660 22:34:42 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:59.613 22:34:43 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:59.613 22:34:43 -- common/autotest_common.sh@1177 -- # local i=0 00:07:59.613 22:34:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:59.613 22:34:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:59.613 22:34:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:01.527 22:34:45 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:01.527 22:34:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:01.527 22:34:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:01.527 22:34:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:01.527 22:34:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:01.527 22:34:45 -- common/autotest_common.sh@1187 -- # return 0 00:08:01.527 22:34:45 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:01.527 22:34:45 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:01.527 22:34:45 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:01.527 22:34:45 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:01.527 22:34:45 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:01.527 22:34:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:01.527 22:34:45 -- setup/common.sh@80 -- # echo 536870912 00:08:01.527 22:34:45 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:01.527 22:34:45 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:01.527 22:34:45 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:01.527 22:34:45 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:01.789 22:34:46 -- target/filesystem.sh@69 -- # partprobe 00:08:02.361 22:34:46 -- target/filesystem.sh@70 -- # sleep 1 00:08:03.304 22:34:47 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:03.304 22:34:47 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:03.304 22:34:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:03.304 22:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.304 22:34:47 -- common/autotest_common.sh@10 -- # set +x 00:08:03.304 ************************************ 00:08:03.304 START TEST filesystem_ext4 00:08:03.304 ************************************ 00:08:03.304 22:34:47 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:03.304 22:34:47 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:03.304 22:34:47 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.304 22:34:47 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:03.304 22:34:47 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:03.304 22:34:47 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:03.304 22:34:47 -- common/autotest_common.sh@904 -- # local i=0 00:08:03.304 22:34:47 -- common/autotest_common.sh@905 -- # local force 00:08:03.304 22:34:47 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:03.304 22:34:47 -- common/autotest_common.sh@908 -- # force=-F 00:08:03.304 22:34:47 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:03.304 mke2fs 1.46.5 (30-Dec-2021) 00:08:03.304 Discarding device blocks: 0/522240 done 00:08:03.304 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:03.304 Filesystem UUID: 7daeccec-fa0c-4130-a245-859df6398243 00:08:03.304 Superblock backups stored on blocks: 00:08:03.304 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:03.304 00:08:03.304 Allocating group tables: 0/64 done 00:08:03.304 Writing inode tables: 0/64 done 00:08:04.689 Creating journal (8192 blocks): done 00:08:04.689 Writing superblocks and filesystem accounting information: 0/64 done 00:08:04.689 00:08:04.689 22:34:49 -- 
common/autotest_common.sh@921 -- # return 0 00:08:04.689 22:34:49 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.689 22:34:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.950 22:34:49 -- target/filesystem.sh@25 -- # sync 00:08:04.950 22:34:49 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.950 22:34:49 -- target/filesystem.sh@27 -- # sync 00:08:04.950 22:34:49 -- target/filesystem.sh@29 -- # i=0 00:08:04.950 22:34:49 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.950 22:34:49 -- target/filesystem.sh@37 -- # kill -0 929234 00:08:04.950 22:34:49 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.950 22:34:49 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.950 22:34:49 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.950 22:34:49 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.950 00:08:04.950 real 0m1.637s 00:08:04.950 user 0m0.030s 00:08:04.950 sys 0m0.067s 00:08:04.950 22:34:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.950 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:08:04.950 ************************************ 00:08:04.950 END TEST filesystem_ext4 00:08:04.950 ************************************ 00:08:04.950 22:34:49 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:04.950 22:34:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:04.950 22:34:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.950 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:08:04.950 ************************************ 00:08:04.950 START TEST filesystem_btrfs 00:08:04.950 ************************************ 00:08:04.950 22:34:49 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:04.950 22:34:49 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:04.950 22:34:49 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.950 22:34:49 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:04.950 22:34:49 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:04.950 22:34:49 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:04.950 22:34:49 -- common/autotest_common.sh@904 -- # local i=0 00:08:04.950 22:34:49 -- common/autotest_common.sh@905 -- # local force 00:08:04.950 22:34:49 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:04.950 22:34:49 -- common/autotest_common.sh@910 -- # force=-f 00:08:04.950 22:34:49 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:05.210 btrfs-progs v6.6.2 00:08:05.210 See https://btrfs.readthedocs.io for more information. 00:08:05.210 00:08:05.210 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
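On the initiator side the test connects with nvme-cli, waits for a block device whose SERIAL matches the subsystem serial, lays down a single GPT partition, and then runs the same mount smoke test once per filesystem; the ext4 pass above completed in about 1.6 s, and btrfs and xfs follow. A condensed sketch of the connect plus one loop body, with paths and the serial taken from the log (the real waitforserial helper counts devices over up to 15 retries; a simple poll is shown here):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

  # wait until lsblk shows a device carrying the subsystem serial
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

  # one GPT partition spanning the namespace
  mkdir -p /mnt/device
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe

  # smoke test per filesystem (mkfs.btrfs -f / mkfs.xfs -f in the later passes)
  mkfs.ext4 -F "/dev/${nvme_name}p1"
  mount "/dev/${nvme_name}p1" /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"   # the target process must still be running afterwards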
00:08:05.210 NOTE: several default settings have changed in version 5.15, please make sure 00:08:05.210 this does not affect your deployments: 00:08:05.210 - DUP for metadata (-m dup) 00:08:05.210 - enabled no-holes (-O no-holes) 00:08:05.210 - enabled free-space-tree (-R free-space-tree) 00:08:05.210 00:08:05.211 Label: (null) 00:08:05.211 UUID: 2b8f72a6-a7c2-474b-a735-6602e46bb093 00:08:05.211 Node size: 16384 00:08:05.211 Sector size: 4096 00:08:05.211 Filesystem size: 510.00MiB 00:08:05.211 Block group profiles: 00:08:05.211 Data: single 8.00MiB 00:08:05.211 Metadata: DUP 32.00MiB 00:08:05.211 System: DUP 8.00MiB 00:08:05.211 SSD detected: yes 00:08:05.211 Zoned device: no 00:08:05.211 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:05.211 Runtime features: free-space-tree 00:08:05.211 Checksum: crc32c 00:08:05.211 Number of devices: 1 00:08:05.211 Devices: 00:08:05.211 ID SIZE PATH 00:08:05.211 1 510.00MiB /dev/nvme0n1p1 00:08:05.211 00:08:05.211 22:34:49 -- common/autotest_common.sh@921 -- # return 0 00:08:05.211 22:34:49 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.781 22:34:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.781 22:34:50 -- target/filesystem.sh@25 -- # sync 00:08:05.781 22:34:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.781 22:34:50 -- target/filesystem.sh@27 -- # sync 00:08:05.781 22:34:50 -- target/filesystem.sh@29 -- # i=0 00:08:05.781 22:34:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.781 22:34:50 -- target/filesystem.sh@37 -- # kill -0 929234 00:08:05.781 22:34:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.781 22:34:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.781 22:34:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.781 22:34:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.781 00:08:05.781 real 0m0.906s 00:08:05.781 user 0m0.032s 00:08:05.781 sys 0m0.127s 00:08:05.781 22:34:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.781 22:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:05.781 ************************************ 00:08:05.781 END TEST filesystem_btrfs 00:08:05.781 ************************************ 00:08:05.781 22:34:50 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:05.781 22:34:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:05.781 22:34:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.781 22:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:06.042 ************************************ 00:08:06.042 START TEST filesystem_xfs 00:08:06.042 ************************************ 00:08:06.042 22:34:50 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:06.042 22:34:50 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:06.042 22:34:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.042 22:34:50 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:06.042 22:34:50 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:06.042 22:34:50 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:06.042 22:34:50 -- common/autotest_common.sh@904 -- # local i=0 00:08:06.042 22:34:50 -- common/autotest_common.sh@905 -- # local force 00:08:06.042 22:34:50 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:06.042 22:34:50 -- common/autotest_common.sh@910 -- # force=-f 00:08:06.042 22:34:50 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:06.042 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:06.042 = sectsz=512 attr=2, projid32bit=1 00:08:06.042 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:06.042 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:06.042 data = bsize=4096 blocks=130560, imaxpct=25 00:08:06.042 = sunit=0 swidth=0 blks 00:08:06.042 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:06.042 log =internal log bsize=4096 blocks=16384, version=2 00:08:06.042 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:06.043 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:06.983 Discarding blocks...Done. 00:08:06.983 22:34:51 -- common/autotest_common.sh@921 -- # return 0 00:08:06.983 22:34:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.531 22:34:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.531 22:34:54 -- target/filesystem.sh@25 -- # sync 00:08:09.532 22:34:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.532 22:34:54 -- target/filesystem.sh@27 -- # sync 00:08:09.532 22:34:54 -- target/filesystem.sh@29 -- # i=0 00:08:09.532 22:34:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.532 22:34:54 -- target/filesystem.sh@37 -- # kill -0 929234 00:08:09.532 22:34:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.532 22:34:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.792 22:34:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.792 22:34:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.792 00:08:09.792 real 0m3.762s 00:08:09.793 user 0m0.028s 00:08:09.793 sys 0m0.077s 00:08:09.793 22:34:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.793 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:08:09.793 ************************************ 00:08:09.793 END TEST filesystem_xfs 00:08:09.793 ************************************ 00:08:09.793 22:34:54 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.793 22:34:54 -- target/filesystem.sh@93 -- # sync 00:08:10.053 22:34:54 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.314 22:34:54 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.314 22:34:54 -- common/autotest_common.sh@1198 -- # local i=0 00:08:10.314 22:34:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:10.314 22:34:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.314 22:34:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:10.314 22:34:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.314 22:34:54 -- common/autotest_common.sh@1210 -- # return 0 00:08:10.314 22:34:54 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.314 22:34:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:10.314 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:08:10.315 22:34:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:10.315 22:34:54 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:10.315 22:34:54 -- target/filesystem.sh@101 -- # killprocess 929234 00:08:10.315 22:34:54 -- common/autotest_common.sh@926 -- # '[' -z 929234 ']' 00:08:10.315 22:34:54 -- common/autotest_common.sh@930 -- # kill -0 929234 00:08:10.315 22:34:54 -- 
common/autotest_common.sh@931 -- # uname 00:08:10.315 22:34:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:10.315 22:34:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 929234 00:08:10.315 22:34:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:10.315 22:34:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:10.315 22:34:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 929234' 00:08:10.315 killing process with pid 929234 00:08:10.315 22:34:55 -- common/autotest_common.sh@945 -- # kill 929234 00:08:10.315 22:34:55 -- common/autotest_common.sh@950 -- # wait 929234 00:08:10.575 22:34:55 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:10.575 00:08:10.575 real 0m14.057s 00:08:10.575 user 0m55.369s 00:08:10.575 sys 0m1.191s 00:08:10.575 22:34:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.575 22:34:55 -- common/autotest_common.sh@10 -- # set +x 00:08:10.575 ************************************ 00:08:10.575 END TEST nvmf_filesystem_no_in_capsule 00:08:10.575 ************************************ 00:08:10.575 22:34:55 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:10.575 22:34:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:10.575 22:34:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.575 22:34:55 -- common/autotest_common.sh@10 -- # set +x 00:08:10.575 ************************************ 00:08:10.575 START TEST nvmf_filesystem_in_capsule 00:08:10.575 ************************************ 00:08:10.575 22:34:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:10.575 22:34:55 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:10.575 22:34:55 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:10.575 22:34:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:10.575 22:34:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:10.575 22:34:55 -- common/autotest_common.sh@10 -- # set +x 00:08:10.575 22:34:55 -- nvmf/common.sh@469 -- # nvmfpid=932224 00:08:10.575 22:34:55 -- nvmf/common.sh@470 -- # waitforlisten 932224 00:08:10.575 22:34:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.575 22:34:55 -- common/autotest_common.sh@819 -- # '[' -z 932224 ']' 00:08:10.575 22:34:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.575 22:34:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:10.575 22:34:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.575 22:34:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:10.575 22:34:55 -- common/autotest_common.sh@10 -- # set +x 00:08:10.575 [2024-04-15 22:34:55.374668] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
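Teardown of the first variant mirrors the setup: the partition is removed under flock, the initiator disconnects, the test waits for the serial to vanish from lsblk, the subsystem is deleted over RPC, and killprocess stops the nvmf_tgt pid before the in-capsule variant starts with a fresh pid (932224). Roughly, following the trace:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  # wait until the SERIAL is gone before tearing down the target side
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess also checks the process name first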
00:08:10.575 [2024-04-15 22:34:55.374719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.836 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.836 [2024-04-15 22:34:55.448075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.836 [2024-04-15 22:34:55.511294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:10.836 [2024-04-15 22:34:55.511426] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.836 [2024-04-15 22:34:55.511440] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.836 [2024-04-15 22:34:55.511449] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.836 [2024-04-15 22:34:55.511576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.836 [2024-04-15 22:34:55.511688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.836 [2024-04-15 22:34:55.511855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.836 [2024-04-15 22:34:55.511856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.407 22:34:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:11.407 22:34:56 -- common/autotest_common.sh@852 -- # return 0 00:08:11.407 22:34:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:11.407 22:34:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:11.407 22:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:11.407 22:34:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.407 22:34:56 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:11.407 22:34:56 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:11.407 22:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.407 22:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:11.407 [2024-04-15 22:34:56.188798] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.407 22:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.407 22:34:56 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:11.407 22:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.407 22:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:11.667 Malloc1 00:08:11.667 22:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.667 22:34:56 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:11.667 22:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.667 22:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:11.667 22:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.667 22:34:56 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:11.667 22:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.667 22:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:11.667 22:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.667 22:34:56 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
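The only functional difference between the two variants is the transport's in-capsule data size: nvmf_filesystem_part 0 created the transport with -c 0, while this pass uses -c 4096, so host writes of up to 4 KiB can be carried inside the command capsule instead of being transferred separately. Everything else (Malloc1, cnode1, the listener on 10.0.0.2:4420, and the ext4/btrfs/xfs passes) repeats unchanged. The delta is a single argument:

  # nvmf_filesystem_no_in_capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  # nvmf_filesystem_in_capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096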
00:08:11.667 22:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.667 22:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:11.668 [2024-04-15 22:34:56.316627] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.668 22:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.668 22:34:56 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:11.668 22:34:56 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:11.668 22:34:56 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:11.668 22:34:56 -- common/autotest_common.sh@1359 -- # local bs 00:08:11.668 22:34:56 -- common/autotest_common.sh@1360 -- # local nb 00:08:11.668 22:34:56 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:11.668 22:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.668 22:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:11.668 22:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.668 22:34:56 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:11.668 { 00:08:11.668 "name": "Malloc1", 00:08:11.668 "aliases": [ 00:08:11.668 "b61cb73f-1cf8-4539-976c-8c139c410d92" 00:08:11.668 ], 00:08:11.668 "product_name": "Malloc disk", 00:08:11.668 "block_size": 512, 00:08:11.668 "num_blocks": 1048576, 00:08:11.668 "uuid": "b61cb73f-1cf8-4539-976c-8c139c410d92", 00:08:11.668 "assigned_rate_limits": { 00:08:11.668 "rw_ios_per_sec": 0, 00:08:11.668 "rw_mbytes_per_sec": 0, 00:08:11.668 "r_mbytes_per_sec": 0, 00:08:11.668 "w_mbytes_per_sec": 0 00:08:11.668 }, 00:08:11.668 "claimed": true, 00:08:11.668 "claim_type": "exclusive_write", 00:08:11.668 "zoned": false, 00:08:11.668 "supported_io_types": { 00:08:11.668 "read": true, 00:08:11.668 "write": true, 00:08:11.668 "unmap": true, 00:08:11.668 "write_zeroes": true, 00:08:11.668 "flush": true, 00:08:11.668 "reset": true, 00:08:11.668 "compare": false, 00:08:11.668 "compare_and_write": false, 00:08:11.668 "abort": true, 00:08:11.668 "nvme_admin": false, 00:08:11.668 "nvme_io": false 00:08:11.668 }, 00:08:11.668 "memory_domains": [ 00:08:11.668 { 00:08:11.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.668 "dma_device_type": 2 00:08:11.668 } 00:08:11.668 ], 00:08:11.668 "driver_specific": {} 00:08:11.668 } 00:08:11.668 ]' 00:08:11.668 22:34:56 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:11.668 22:34:56 -- common/autotest_common.sh@1362 -- # bs=512 00:08:11.668 22:34:56 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:11.668 22:34:56 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:11.668 22:34:56 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:11.668 22:34:56 -- common/autotest_common.sh@1367 -- # echo 512 00:08:11.668 22:34:56 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:11.668 22:34:56 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.583 22:34:57 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.583 22:34:57 -- common/autotest_common.sh@1177 -- # local i=0 00:08:13.583 22:34:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.583 22:34:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:13.583 22:34:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:15.499 22:34:59 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:15.499 22:34:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:15.499 22:34:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.499 22:35:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:15.499 22:35:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.499 22:35:00 -- common/autotest_common.sh@1187 -- # return 0 00:08:15.499 22:35:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:15.499 22:35:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:15.499 22:35:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:15.499 22:35:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:15.499 22:35:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:15.499 22:35:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:15.499 22:35:00 -- setup/common.sh@80 -- # echo 536870912 00:08:15.499 22:35:00 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:15.499 22:35:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:15.499 22:35:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:15.499 22:35:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:15.760 22:35:00 -- target/filesystem.sh@69 -- # partprobe 00:08:16.364 22:35:01 -- target/filesystem.sh@70 -- # sleep 1 00:08:17.747 22:35:02 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:17.747 22:35:02 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:17.747 22:35:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:17.747 22:35:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.747 22:35:02 -- common/autotest_common.sh@10 -- # set +x 00:08:17.747 ************************************ 00:08:17.747 START TEST filesystem_in_capsule_ext4 00:08:17.747 ************************************ 00:08:17.747 22:35:02 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:17.747 22:35:02 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:17.747 22:35:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.747 22:35:02 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:17.747 22:35:02 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:17.747 22:35:02 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:17.747 22:35:02 -- common/autotest_common.sh@904 -- # local i=0 00:08:17.747 22:35:02 -- common/autotest_common.sh@905 -- # local force 00:08:17.747 22:35:02 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:17.747 22:35:02 -- common/autotest_common.sh@908 -- # force=-F 00:08:17.747 22:35:02 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:17.747 mke2fs 1.46.5 (30-Dec-2021) 00:08:17.747 Discarding device blocks: 0/522240 done 00:08:17.747 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:17.747 Filesystem UUID: dba20e82-0386-468b-8149-62007aa447c7 00:08:17.747 Superblock backups stored on blocks: 00:08:17.747 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:17.747 00:08:17.747 Allocating group tables: 0/64 done 00:08:17.747 Writing inode tables: 0/64 done 00:08:17.747 Creating journal (8192 blocks): done 00:08:18.837 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:08:18.837 00:08:18.837 
22:35:03 -- common/autotest_common.sh@921 -- # return 0 00:08:18.837 22:35:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.097 22:35:03 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.097 22:35:03 -- target/filesystem.sh@25 -- # sync 00:08:19.097 22:35:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.097 22:35:03 -- target/filesystem.sh@27 -- # sync 00:08:19.097 22:35:03 -- target/filesystem.sh@29 -- # i=0 00:08:19.097 22:35:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.371 22:35:03 -- target/filesystem.sh@37 -- # kill -0 932224 00:08:19.371 22:35:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.371 22:35:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.371 22:35:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.371 22:35:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.371 00:08:19.371 real 0m1.759s 00:08:19.371 user 0m0.030s 00:08:19.371 sys 0m0.071s 00:08:19.371 22:35:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.371 22:35:03 -- common/autotest_common.sh@10 -- # set +x 00:08:19.371 ************************************ 00:08:19.371 END TEST filesystem_in_capsule_ext4 00:08:19.371 ************************************ 00:08:19.371 22:35:03 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:19.371 22:35:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.371 22:35:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.371 22:35:03 -- common/autotest_common.sh@10 -- # set +x 00:08:19.371 ************************************ 00:08:19.371 START TEST filesystem_in_capsule_btrfs 00:08:19.371 ************************************ 00:08:19.371 22:35:03 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:19.371 22:35:03 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:19.371 22:35:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.371 22:35:03 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:19.371 22:35:03 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:19.371 22:35:03 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.371 22:35:03 -- common/autotest_common.sh@904 -- # local i=0 00:08:19.371 22:35:03 -- common/autotest_common.sh@905 -- # local force 00:08:19.371 22:35:03 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:19.371 22:35:03 -- common/autotest_common.sh@910 -- # force=-f 00:08:19.371 22:35:03 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:19.371 btrfs-progs v6.6.2 00:08:19.371 See https://btrfs.readthedocs.io for more information. 00:08:19.371 00:08:19.371 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:19.371 NOTE: several default settings have changed in version 5.15, please make sure 00:08:19.371 this does not affect your deployments: 00:08:19.371 - DUP for metadata (-m dup) 00:08:19.371 - enabled no-holes (-O no-holes) 00:08:19.371 - enabled free-space-tree (-R free-space-tree) 00:08:19.371 00:08:19.371 Label: (null) 00:08:19.371 UUID: ac53944c-1cbd-47d4-9d5a-f38acc2cebc0 00:08:19.371 Node size: 16384 00:08:19.371 Sector size: 4096 00:08:19.371 Filesystem size: 510.00MiB 00:08:19.371 Block group profiles: 00:08:19.371 Data: single 8.00MiB 00:08:19.371 Metadata: DUP 32.00MiB 00:08:19.371 System: DUP 8.00MiB 00:08:19.371 SSD detected: yes 00:08:19.371 Zoned device: no 00:08:19.371 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:19.371 Runtime features: free-space-tree 00:08:19.371 Checksum: crc32c 00:08:19.371 Number of devices: 1 00:08:19.371 Devices: 00:08:19.371 ID SIZE PATH 00:08:19.371 1 510.00MiB /dev/nvme0n1p1 00:08:19.371 00:08:19.371 22:35:04 -- common/autotest_common.sh@921 -- # return 0 00:08:19.371 22:35:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.688 22:35:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.688 22:35:04 -- target/filesystem.sh@25 -- # sync 00:08:19.949 22:35:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.949 22:35:04 -- target/filesystem.sh@27 -- # sync 00:08:19.949 22:35:04 -- target/filesystem.sh@29 -- # i=0 00:08:19.949 22:35:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.949 22:35:04 -- target/filesystem.sh@37 -- # kill -0 932224 00:08:19.949 22:35:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.949 22:35:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.949 22:35:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.949 22:35:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.949 00:08:19.949 real 0m0.584s 00:08:19.949 user 0m0.026s 00:08:19.949 sys 0m0.137s 00:08:19.949 22:35:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.949 22:35:04 -- common/autotest_common.sh@10 -- # set +x 00:08:19.949 ************************************ 00:08:19.949 END TEST filesystem_in_capsule_btrfs 00:08:19.949 ************************************ 00:08:19.949 22:35:04 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:19.949 22:35:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.949 22:35:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.949 22:35:04 -- common/autotest_common.sh@10 -- # set +x 00:08:19.949 ************************************ 00:08:19.949 START TEST filesystem_in_capsule_xfs 00:08:19.949 ************************************ 00:08:19.949 22:35:04 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:19.949 22:35:04 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:19.949 22:35:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.949 22:35:04 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:19.949 22:35:04 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:19.949 22:35:04 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.949 22:35:04 -- common/autotest_common.sh@904 -- # local i=0 00:08:19.949 22:35:04 -- common/autotest_common.sh@905 -- # local force 00:08:19.949 22:35:04 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:19.949 22:35:04 -- common/autotest_common.sh@910 -- # force=-f 
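Each TEST above reaches mkfs through the make_filesystem helper in autotest_common.sh, which picks the force flag per filesystem (-F for ext4, -f for btrfs and xfs) before invoking mkfs; the local i=0 in the trace suggests a retry counter around the call, which is elided here. A rough sketch of just that dispatch, using the device from the log:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force

      # ext4 forces with -F, btrfs/xfs with -f (matches the trace above)
      if [[ $fstype == ext4 ]]; then
          force=-F
      else
          force=-f
      fi

      mkfs.$fstype $force "$dev_name"
  }

  make_filesystem xfs /dev/nvme0n1p1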
00:08:19.949 22:35:04 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:19.949 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:19.949 = sectsz=512 attr=2, projid32bit=1 00:08:19.949 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:19.949 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:19.949 data = bsize=4096 blocks=130560, imaxpct=25 00:08:19.949 = sunit=0 swidth=0 blks 00:08:19.949 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:19.949 log =internal log bsize=4096 blocks=16384, version=2 00:08:19.949 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:19.949 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.892 Discarding blocks...Done. 00:08:20.892 22:35:05 -- common/autotest_common.sh@921 -- # return 0 00:08:20.892 22:35:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.803 22:35:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.803 22:35:07 -- target/filesystem.sh@25 -- # sync 00:08:22.803 22:35:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.064 22:35:07 -- target/filesystem.sh@27 -- # sync 00:08:23.064 22:35:07 -- target/filesystem.sh@29 -- # i=0 00:08:23.064 22:35:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.064 22:35:07 -- target/filesystem.sh@37 -- # kill -0 932224 00:08:23.064 22:35:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.064 22:35:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.064 22:35:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.064 22:35:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.064 00:08:23.064 real 0m3.051s 00:08:23.064 user 0m0.036s 00:08:23.064 sys 0m0.068s 00:08:23.064 22:35:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.064 22:35:07 -- common/autotest_common.sh@10 -- # set +x 00:08:23.064 ************************************ 00:08:23.064 END TEST filesystem_in_capsule_xfs 00:08:23.064 ************************************ 00:08:23.064 22:35:07 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:23.064 22:35:07 -- target/filesystem.sh@93 -- # sync 00:08:23.064 22:35:07 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.324 22:35:07 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.324 22:35:07 -- common/autotest_common.sh@1198 -- # local i=0 00:08:23.324 22:35:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:23.324 22:35:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.324 22:35:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:23.324 22:35:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.324 22:35:07 -- common/autotest_common.sh@1210 -- # return 0 00:08:23.324 22:35:07 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.324 22:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.324 22:35:07 -- common/autotest_common.sh@10 -- # set +x 00:08:23.324 22:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.324 22:35:07 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:23.324 22:35:07 -- target/filesystem.sh@101 -- # killprocess 932224 00:08:23.324 22:35:07 -- common/autotest_common.sh@926 -- # '[' -z 932224 ']' 00:08:23.324 22:35:07 -- common/autotest_common.sh@930 -- # kill -0 932224 
00:08:23.324 22:35:07 -- common/autotest_common.sh@931 -- # uname 00:08:23.324 22:35:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:23.324 22:35:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 932224 00:08:23.324 22:35:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.324 22:35:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.324 22:35:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 932224' 00:08:23.324 killing process with pid 932224 00:08:23.324 22:35:07 -- common/autotest_common.sh@945 -- # kill 932224 00:08:23.324 22:35:07 -- common/autotest_common.sh@950 -- # wait 932224 00:08:23.583 22:35:08 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:23.583 00:08:23.583 real 0m12.887s 00:08:23.583 user 0m50.782s 00:08:23.583 sys 0m1.162s 00:08:23.583 22:35:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.583 22:35:08 -- common/autotest_common.sh@10 -- # set +x 00:08:23.583 ************************************ 00:08:23.583 END TEST nvmf_filesystem_in_capsule 00:08:23.583 ************************************ 00:08:23.583 22:35:08 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:23.583 22:35:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:23.583 22:35:08 -- nvmf/common.sh@116 -- # sync 00:08:23.583 22:35:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:23.583 22:35:08 -- nvmf/common.sh@119 -- # set +e 00:08:23.583 22:35:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:23.584 22:35:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:23.584 rmmod nvme_tcp 00:08:23.584 rmmod nvme_fabrics 00:08:23.584 rmmod nvme_keyring 00:08:23.584 22:35:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:23.584 22:35:08 -- nvmf/common.sh@123 -- # set -e 00:08:23.584 22:35:08 -- nvmf/common.sh@124 -- # return 0 00:08:23.584 22:35:08 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:23.584 22:35:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:23.584 22:35:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:23.584 22:35:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:23.584 22:35:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.584 22:35:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:23.584 22:35:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.584 22:35:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.584 22:35:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.128 22:35:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:26.128 00:08:26.128 real 0m37.400s 00:08:26.128 user 1m48.537s 00:08:26.128 sys 0m8.352s 00:08:26.128 22:35:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.128 22:35:10 -- common/autotest_common.sh@10 -- # set +x 00:08:26.128 ************************************ 00:08:26.128 END TEST nvmf_filesystem 00:08:26.128 ************************************ 00:08:26.128 22:35:10 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:26.128 22:35:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:26.128 22:35:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.128 22:35:10 -- common/autotest_common.sh@10 -- # set +x 00:08:26.128 ************************************ 00:08:26.128 START TEST nvmf_discovery 00:08:26.128 ************************************ 00:08:26.128 22:35:10 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:26.128 * Looking for test storage... 00:08:26.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.128 22:35:10 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.128 22:35:10 -- nvmf/common.sh@7 -- # uname -s 00:08:26.128 22:35:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.128 22:35:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.128 22:35:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.128 22:35:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.128 22:35:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.128 22:35:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.128 22:35:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.128 22:35:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.128 22:35:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.128 22:35:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.128 22:35:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:26.129 22:35:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:26.129 22:35:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.129 22:35:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.129 22:35:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.129 22:35:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.129 22:35:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.129 22:35:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.129 22:35:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.129 22:35:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.129 22:35:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.129 22:35:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.129 22:35:10 -- paths/export.sh@5 -- # export PATH 00:08:26.129 22:35:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.129 22:35:10 -- nvmf/common.sh@46 -- # : 0 00:08:26.129 22:35:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:26.129 22:35:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:26.129 22:35:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:26.129 22:35:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.129 22:35:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.129 22:35:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:26.129 22:35:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:26.129 22:35:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:26.129 22:35:10 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:26.129 22:35:10 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:26.129 22:35:10 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:26.129 22:35:10 -- target/discovery.sh@15 -- # hash nvme 00:08:26.129 22:35:10 -- target/discovery.sh@20 -- # nvmftestinit 00:08:26.129 22:35:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:26.129 22:35:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.129 22:35:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:26.129 22:35:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:26.129 22:35:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:26.129 22:35:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.129 22:35:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.129 22:35:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.129 22:35:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:26.129 22:35:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:26.129 22:35:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:26.129 22:35:10 -- common/autotest_common.sh@10 -- # set +x 00:08:34.271 22:35:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:34.271 22:35:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:34.271 22:35:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:34.271 22:35:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:34.271 22:35:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:34.271 22:35:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:34.271 22:35:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:34.271 22:35:18 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:34.271 22:35:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:34.271 22:35:18 -- nvmf/common.sh@295 -- # e810=() 00:08:34.271 22:35:18 -- nvmf/common.sh@295 -- # local -ga e810 00:08:34.271 22:35:18 -- nvmf/common.sh@296 -- # x722=() 00:08:34.271 22:35:18 -- nvmf/common.sh@296 -- # local -ga x722 00:08:34.271 22:35:18 -- nvmf/common.sh@297 -- # mlx=() 00:08:34.271 22:35:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:34.271 22:35:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.271 22:35:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:34.271 22:35:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:34.271 22:35:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:34.271 22:35:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:34.271 22:35:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:34.271 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:34.271 22:35:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:34.271 22:35:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:34.271 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:34.271 22:35:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:34.271 22:35:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:34.271 22:35:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:34.271 22:35:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.271 22:35:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:34.271 22:35:18 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.272 22:35:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:34.272 Found net devices under 0000:31:00.0: cvl_0_0 00:08:34.272 22:35:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.272 22:35:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:34.272 22:35:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.272 22:35:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:34.272 22:35:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.272 22:35:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:34.272 Found net devices under 0000:31:00.1: cvl_0_1 00:08:34.272 22:35:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.272 22:35:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:34.272 22:35:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:34.272 22:35:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:34.272 22:35:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:34.272 22:35:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:34.272 22:35:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.272 22:35:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.272 22:35:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.272 22:35:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:34.272 22:35:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.272 22:35:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.272 22:35:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:34.272 22:35:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.272 22:35:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.272 22:35:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:34.272 22:35:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:34.272 22:35:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.272 22:35:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.272 22:35:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.272 22:35:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.272 22:35:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:34.272 22:35:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.272 22:35:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.272 22:35:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.272 22:35:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:34.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:08:34.272 00:08:34.272 --- 10.0.0.2 ping statistics --- 00:08:34.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.272 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:08:34.272 22:35:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:08:34.272 00:08:34.272 --- 10.0.0.1 ping statistics --- 00:08:34.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.272 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:08:34.272 22:35:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.272 22:35:18 -- nvmf/common.sh@410 -- # return 0 00:08:34.272 22:35:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:34.272 22:35:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.272 22:35:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:34.272 22:35:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:34.272 22:35:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.272 22:35:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:34.272 22:35:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:34.272 22:35:18 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:34.272 22:35:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:34.272 22:35:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:34.272 22:35:18 -- common/autotest_common.sh@10 -- # set +x 00:08:34.272 22:35:18 -- nvmf/common.sh@469 -- # nvmfpid=939800 00:08:34.272 22:35:18 -- nvmf/common.sh@470 -- # waitforlisten 939800 00:08:34.272 22:35:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.272 22:35:18 -- common/autotest_common.sh@819 -- # '[' -z 939800 ']' 00:08:34.272 22:35:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.272 22:35:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:34.272 22:35:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.272 22:35:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:34.272 22:35:18 -- common/autotest_common.sh@10 -- # set +x 00:08:34.272 [2024-04-15 22:35:18.658083] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:34.272 [2024-04-15 22:35:18.658187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.272 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.272 [2024-04-15 22:35:18.741757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.272 [2024-04-15 22:35:18.813496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:34.272 [2024-04-15 22:35:18.813636] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.272 [2024-04-15 22:35:18.813645] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.272 [2024-04-15 22:35:18.813654] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
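The nvmf_tcp_init steps traced above give the test a self-contained TCP topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side with 10.0.0.2, the peer port (cvl_0_1) stays in the host namespace as the initiator with 10.0.0.1, port 4420 is opened in the firewall, and both directions are ping-checked. A condensed sketch of that wiring, assuming the same interface names:

  # Condensed from the nvmf_tcp_init trace above; interface names are from this setup.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the host ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # host -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> host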
00:08:34.272 [2024-04-15 22:35:18.813795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.272 [2024-04-15 22:35:18.813921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.272 [2024-04-15 22:35:18.814084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.272 [2024-04-15 22:35:18.814084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.842 22:35:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:34.843 22:35:19 -- common/autotest_common.sh@852 -- # return 0 00:08:34.843 22:35:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:34.843 22:35:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.843 22:35:19 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 [2024-04-15 22:35:19.473676] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@26 -- # seq 1 4 00:08:34.843 22:35:19 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.843 22:35:19 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 Null1 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 [2024-04-15 22:35:19.533987] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.843 22:35:19 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 Null2 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:34.843 22:35:19 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.843 22:35:19 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 Null3 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.843 22:35:19 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:34.843 Null4 00:08:34.843 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.843 22:35:19 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:34.843 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.843 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.104 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.104 22:35:19 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:35.104 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.104 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.104 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.104 22:35:19 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:35.104 
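The rpc_cmd calls traced in this stretch repeat one pattern four times: create a null bdev, wrap it in a subsystem, attach it as a namespace, and listen on 10.0.0.2:4420. A rough equivalent, assuming scripts/rpc.py is invoked directly against the default socket (the test wraps this in its own rpc_cmd helper):

  # Rough equivalent of the discovery.sh setup loop; rpc.py path and serial format are
  # written to match the trace above, not guaranteed verbatim.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512            # size/block-size as in the trace
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done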
22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.104 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.104 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.104 22:35:19 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.104 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.104 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.104 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.104 22:35:19 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:35.104 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.104 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.104 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.104 22:35:19 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:35.365 00:08:35.365 Discovery Log Number of Records 6, Generation counter 6 00:08:35.365 =====Discovery Log Entry 0====== 00:08:35.365 trtype: tcp 00:08:35.365 adrfam: ipv4 00:08:35.365 subtype: current discovery subsystem 00:08:35.365 treq: not required 00:08:35.365 portid: 0 00:08:35.365 trsvcid: 4420 00:08:35.365 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:35.365 traddr: 10.0.0.2 00:08:35.365 eflags: explicit discovery connections, duplicate discovery information 00:08:35.365 sectype: none 00:08:35.365 =====Discovery Log Entry 1====== 00:08:35.365 trtype: tcp 00:08:35.365 adrfam: ipv4 00:08:35.365 subtype: nvme subsystem 00:08:35.365 treq: not required 00:08:35.365 portid: 0 00:08:35.365 trsvcid: 4420 00:08:35.365 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:35.365 traddr: 10.0.0.2 00:08:35.365 eflags: none 00:08:35.365 sectype: none 00:08:35.365 =====Discovery Log Entry 2====== 00:08:35.365 trtype: tcp 00:08:35.365 adrfam: ipv4 00:08:35.365 subtype: nvme subsystem 00:08:35.365 treq: not required 00:08:35.365 portid: 0 00:08:35.365 trsvcid: 4420 00:08:35.365 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:35.365 traddr: 10.0.0.2 00:08:35.365 eflags: none 00:08:35.365 sectype: none 00:08:35.365 =====Discovery Log Entry 3====== 00:08:35.365 trtype: tcp 00:08:35.365 adrfam: ipv4 00:08:35.365 subtype: nvme subsystem 00:08:35.365 treq: not required 00:08:35.365 portid: 0 00:08:35.365 trsvcid: 4420 00:08:35.365 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:35.365 traddr: 10.0.0.2 00:08:35.365 eflags: none 00:08:35.365 sectype: none 00:08:35.365 =====Discovery Log Entry 4====== 00:08:35.365 trtype: tcp 00:08:35.365 adrfam: ipv4 00:08:35.365 subtype: nvme subsystem 00:08:35.365 treq: not required 00:08:35.365 portid: 0 00:08:35.365 trsvcid: 4420 00:08:35.365 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:35.365 traddr: 10.0.0.2 00:08:35.365 eflags: none 00:08:35.365 sectype: none 00:08:35.365 =====Discovery Log Entry 5====== 00:08:35.365 trtype: tcp 00:08:35.365 adrfam: ipv4 00:08:35.365 subtype: discovery subsystem referral 00:08:35.365 treq: not required 00:08:35.365 portid: 0 00:08:35.365 trsvcid: 4430 00:08:35.365 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:35.365 traddr: 10.0.0.2 00:08:35.365 eflags: none 00:08:35.365 sectype: none 00:08:35.365 22:35:19 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:35.365 Perform nvmf subsystem discovery via RPC 00:08:35.365 22:35:19 -- 
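The six discovery log records above are exactly what the setup predicts: the current discovery subsystem itself, the four cnode1 through cnode4 NVMe subsystems, and the port 4430 referral. The listing was produced with nvme-cli against the discovery service; a sketch of the equivalent invocation, with the hostnqn/hostid values of this run standing in for whatever a local nvme gen-hostnqn would return:

  # Query the discovery service on the target namespace IP (values from this run).
  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  # Expect one "current discovery subsystem" record, four "nvme subsystem" records
  # (cnode1-cnode4), and one "discovery subsystem referral" record pointing at port 4430.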
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:35.365 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.365 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.365 [2024-04-15 22:35:19.923162] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:35.365 [ 00:08:35.365 { 00:08:35.365 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:35.365 "subtype": "Discovery", 00:08:35.365 "listen_addresses": [ 00:08:35.365 { 00:08:35.365 "transport": "TCP", 00:08:35.365 "trtype": "TCP", 00:08:35.365 "adrfam": "IPv4", 00:08:35.365 "traddr": "10.0.0.2", 00:08:35.365 "trsvcid": "4420" 00:08:35.365 } 00:08:35.365 ], 00:08:35.365 "allow_any_host": true, 00:08:35.365 "hosts": [] 00:08:35.365 }, 00:08:35.365 { 00:08:35.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.365 "subtype": "NVMe", 00:08:35.365 "listen_addresses": [ 00:08:35.365 { 00:08:35.365 "transport": "TCP", 00:08:35.365 "trtype": "TCP", 00:08:35.365 "adrfam": "IPv4", 00:08:35.365 "traddr": "10.0.0.2", 00:08:35.365 "trsvcid": "4420" 00:08:35.365 } 00:08:35.365 ], 00:08:35.365 "allow_any_host": true, 00:08:35.365 "hosts": [], 00:08:35.365 "serial_number": "SPDK00000000000001", 00:08:35.365 "model_number": "SPDK bdev Controller", 00:08:35.365 "max_namespaces": 32, 00:08:35.365 "min_cntlid": 1, 00:08:35.365 "max_cntlid": 65519, 00:08:35.365 "namespaces": [ 00:08:35.365 { 00:08:35.365 "nsid": 1, 00:08:35.365 "bdev_name": "Null1", 00:08:35.365 "name": "Null1", 00:08:35.365 "nguid": "5D87F193E0C64C7780A664BE6645C058", 00:08:35.365 "uuid": "5d87f193-e0c6-4c77-80a6-64be6645c058" 00:08:35.365 } 00:08:35.365 ] 00:08:35.365 }, 00:08:35.365 { 00:08:35.365 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:35.365 "subtype": "NVMe", 00:08:35.365 "listen_addresses": [ 00:08:35.365 { 00:08:35.365 "transport": "TCP", 00:08:35.366 "trtype": "TCP", 00:08:35.366 "adrfam": "IPv4", 00:08:35.366 "traddr": "10.0.0.2", 00:08:35.366 "trsvcid": "4420" 00:08:35.366 } 00:08:35.366 ], 00:08:35.366 "allow_any_host": true, 00:08:35.366 "hosts": [], 00:08:35.366 "serial_number": "SPDK00000000000002", 00:08:35.366 "model_number": "SPDK bdev Controller", 00:08:35.366 "max_namespaces": 32, 00:08:35.366 "min_cntlid": 1, 00:08:35.366 "max_cntlid": 65519, 00:08:35.366 "namespaces": [ 00:08:35.366 { 00:08:35.366 "nsid": 1, 00:08:35.366 "bdev_name": "Null2", 00:08:35.366 "name": "Null2", 00:08:35.366 "nguid": "15BC4B7A07C849A3A39E77153C2AF942", 00:08:35.366 "uuid": "15bc4b7a-07c8-49a3-a39e-77153c2af942" 00:08:35.366 } 00:08:35.366 ] 00:08:35.366 }, 00:08:35.366 { 00:08:35.366 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:35.366 "subtype": "NVMe", 00:08:35.366 "listen_addresses": [ 00:08:35.366 { 00:08:35.366 "transport": "TCP", 00:08:35.366 "trtype": "TCP", 00:08:35.366 "adrfam": "IPv4", 00:08:35.366 "traddr": "10.0.0.2", 00:08:35.366 "trsvcid": "4420" 00:08:35.366 } 00:08:35.366 ], 00:08:35.366 "allow_any_host": true, 00:08:35.366 "hosts": [], 00:08:35.366 "serial_number": "SPDK00000000000003", 00:08:35.366 "model_number": "SPDK bdev Controller", 00:08:35.366 "max_namespaces": 32, 00:08:35.366 "min_cntlid": 1, 00:08:35.366 "max_cntlid": 65519, 00:08:35.366 "namespaces": [ 00:08:35.366 { 00:08:35.366 "nsid": 1, 00:08:35.366 "bdev_name": "Null3", 00:08:35.366 "name": "Null3", 00:08:35.366 "nguid": "7E7C8C1D5DEB469F8BC9A14BBE53AE47", 00:08:35.366 "uuid": "7e7c8c1d-5deb-469f-8bc9-a14bbe53ae47" 00:08:35.366 } 00:08:35.366 ] 
00:08:35.366 }, 00:08:35.366 { 00:08:35.366 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:35.366 "subtype": "NVMe", 00:08:35.366 "listen_addresses": [ 00:08:35.366 { 00:08:35.366 "transport": "TCP", 00:08:35.366 "trtype": "TCP", 00:08:35.366 "adrfam": "IPv4", 00:08:35.366 "traddr": "10.0.0.2", 00:08:35.366 "trsvcid": "4420" 00:08:35.366 } 00:08:35.366 ], 00:08:35.366 "allow_any_host": true, 00:08:35.366 "hosts": [], 00:08:35.366 "serial_number": "SPDK00000000000004", 00:08:35.366 "model_number": "SPDK bdev Controller", 00:08:35.366 "max_namespaces": 32, 00:08:35.366 "min_cntlid": 1, 00:08:35.366 "max_cntlid": 65519, 00:08:35.366 "namespaces": [ 00:08:35.366 { 00:08:35.366 "nsid": 1, 00:08:35.366 "bdev_name": "Null4", 00:08:35.366 "name": "Null4", 00:08:35.366 "nguid": "3983A2A9077340078F4121219FE599C5", 00:08:35.366 "uuid": "3983a2a9-0773-4007-8f41-21219fe599c5" 00:08:35.366 } 00:08:35.366 ] 00:08:35.366 } 00:08:35.366 ] 00:08:35.366 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:19 -- target/discovery.sh@42 -- # seq 1 4 00:08:35.366 22:35:19 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.366 22:35:19 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.366 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:19 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:35.366 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:19 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.366 22:35:19 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:35.366 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:19 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:35.366 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:19 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.366 22:35:19 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:35.366 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:19 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:35.366 22:35:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.366 22:35:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:35.366 22:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:20 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
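The JSON dump above comes from the nvmf_get_subsystems RPC; the test only prints it here, but the same output is easy to reduce when you want just the NQNs or namespace UUIDs. A small sketch, assuming rpc.py and jq are available (the jq filter is illustrative, not part of discovery.sh):

  # List subsystem NQNs with their namespace UUIDs from the RPC output (sketch).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_get_subsystems \
    | jq -r '.[] | select(.subtype == "NVMe") | "\(.nqn) \(.namespaces[].uuid)"'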
00:08:35.366 22:35:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:35.366 22:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:20 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:20 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:35.366 22:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:20 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:20 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:35.366 22:35:20 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:35.366 22:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.366 22:35:20 -- common/autotest_common.sh@10 -- # set +x 00:08:35.366 22:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.366 22:35:20 -- target/discovery.sh@49 -- # check_bdevs= 00:08:35.366 22:35:20 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:35.366 22:35:20 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:35.366 22:35:20 -- target/discovery.sh@57 -- # nvmftestfini 00:08:35.366 22:35:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:35.366 22:35:20 -- nvmf/common.sh@116 -- # sync 00:08:35.366 22:35:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:35.366 22:35:20 -- nvmf/common.sh@119 -- # set +e 00:08:35.366 22:35:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:35.366 22:35:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:35.366 rmmod nvme_tcp 00:08:35.366 rmmod nvme_fabrics 00:08:35.366 rmmod nvme_keyring 00:08:35.366 22:35:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:35.367 22:35:20 -- nvmf/common.sh@123 -- # set -e 00:08:35.367 22:35:20 -- nvmf/common.sh@124 -- # return 0 00:08:35.367 22:35:20 -- nvmf/common.sh@477 -- # '[' -n 939800 ']' 00:08:35.367 22:35:20 -- nvmf/common.sh@478 -- # killprocess 939800 00:08:35.367 22:35:20 -- common/autotest_common.sh@926 -- # '[' -z 939800 ']' 00:08:35.367 22:35:20 -- common/autotest_common.sh@930 -- # kill -0 939800 00:08:35.367 22:35:20 -- common/autotest_common.sh@931 -- # uname 00:08:35.367 22:35:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:35.367 22:35:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 939800 00:08:35.628 22:35:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:35.628 22:35:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:35.628 22:35:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 939800' 00:08:35.628 killing process with pid 939800 00:08:35.628 22:35:20 -- common/autotest_common.sh@945 -- # kill 939800 00:08:35.628 [2024-04-15 22:35:20.211856] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:35.628 22:35:20 -- common/autotest_common.sh@950 -- # wait 939800 00:08:35.628 22:35:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:35.628 22:35:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:35.628 22:35:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:35.628 22:35:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.628 22:35:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:35.628 22:35:20 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.628 22:35:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.628 22:35:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.175 22:35:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:38.175 00:08:38.175 real 0m11.989s 00:08:38.175 user 0m8.650s 00:08:38.175 sys 0m6.229s 00:08:38.175 22:35:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.175 22:35:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.175 ************************************ 00:08:38.175 END TEST nvmf_discovery 00:08:38.175 ************************************ 00:08:38.175 22:35:22 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.175 22:35:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:38.175 22:35:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.175 22:35:22 -- common/autotest_common.sh@10 -- # set +x 00:08:38.175 ************************************ 00:08:38.175 START TEST nvmf_referrals 00:08:38.175 ************************************ 00:08:38.175 22:35:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.175 * Looking for test storage... 00:08:38.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.175 22:35:22 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.175 22:35:22 -- nvmf/common.sh@7 -- # uname -s 00:08:38.175 22:35:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.175 22:35:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.175 22:35:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.175 22:35:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.175 22:35:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.175 22:35:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.175 22:35:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.175 22:35:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.175 22:35:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.175 22:35:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.175 22:35:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.175 22:35:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.175 22:35:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.175 22:35:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.175 22:35:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.175 22:35:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.175 22:35:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.175 22:35:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.175 22:35:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.175 22:35:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.175 22:35:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.175 22:35:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.175 22:35:22 -- paths/export.sh@5 -- # export PATH 00:08:38.175 22:35:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.175 22:35:22 -- nvmf/common.sh@46 -- # : 0 00:08:38.175 22:35:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:38.175 22:35:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:38.175 22:35:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:38.175 22:35:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.175 22:35:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.175 22:35:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:38.175 22:35:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:38.175 22:35:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:38.175 22:35:22 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:38.175 22:35:22 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:38.176 22:35:22 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:38.176 22:35:22 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:38.176 22:35:22 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:38.176 22:35:22 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:38.176 22:35:22 -- target/referrals.sh@37 -- # nvmftestinit 00:08:38.176 22:35:22 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:38.176 22:35:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.176 22:35:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:38.176 22:35:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:38.176 22:35:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:38.176 22:35:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.176 22:35:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.176 22:35:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.176 22:35:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:38.176 22:35:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:38.176 22:35:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:38.176 22:35:22 -- common/autotest_common.sh@10 -- # set +x 00:08:46.321 22:35:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.321 22:35:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:46.321 22:35:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:46.321 22:35:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:46.321 22:35:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:46.321 22:35:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:46.321 22:35:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:46.321 22:35:30 -- nvmf/common.sh@294 -- # net_devs=() 00:08:46.321 22:35:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:46.321 22:35:30 -- nvmf/common.sh@295 -- # e810=() 00:08:46.321 22:35:30 -- nvmf/common.sh@295 -- # local -ga e810 00:08:46.321 22:35:30 -- nvmf/common.sh@296 -- # x722=() 00:08:46.321 22:35:30 -- nvmf/common.sh@296 -- # local -ga x722 00:08:46.321 22:35:30 -- nvmf/common.sh@297 -- # mlx=() 00:08:46.321 22:35:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:46.321 22:35:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.321 22:35:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:46.321 22:35:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:46.321 22:35:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:46.321 22:35:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.321 22:35:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:46.321 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:46.321 22:35:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.321 22:35:30 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.321 22:35:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:46.321 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:46.321 22:35:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:46.321 22:35:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.321 22:35:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.321 22:35:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.321 22:35:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.321 22:35:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:46.321 Found net devices under 0000:31:00.0: cvl_0_0 00:08:46.321 22:35:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.321 22:35:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.321 22:35:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.321 22:35:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.321 22:35:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.321 22:35:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:46.321 Found net devices under 0000:31:00.1: cvl_0_1 00:08:46.321 22:35:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.321 22:35:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:46.321 22:35:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:46.321 22:35:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:46.321 22:35:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.321 22:35:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.321 22:35:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.321 22:35:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:46.321 22:35:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.321 22:35:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.321 22:35:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:46.321 22:35:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.321 22:35:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.321 22:35:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:46.321 22:35:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:46.321 22:35:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.321 22:35:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
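The device scan repeated here for the referrals run is the same E810 lookup as before: match PCI functions with Intel device id 0x159b (bound to the ice driver) and resolve each one to its kernel net device through sysfs before handing one port to the target namespace. Approximately, using lspci instead of the script's internal PCI cache:

  # Approximation of gather_supported_nvmf_pci_devs for E810 (8086:159b) ports;
  # the real helper walks its own pci_bus_cache rather than calling lspci.
  for pci in $(lspci -Dn | awk '$3 == "8086:159b" {print $1}'); do
    echo "Found net devices under $pci: $(ls /sys/bus/pci/devices/$pci/net/)"
  done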
00:08:46.321 22:35:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.321 22:35:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.321 22:35:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:46.321 22:35:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.321 22:35:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.321 22:35:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.321 22:35:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:46.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:08:46.321 00:08:46.321 --- 10.0.0.2 ping statistics --- 00:08:46.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.321 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:08:46.321 22:35:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:08:46.321 00:08:46.321 --- 10.0.0.1 ping statistics --- 00:08:46.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.321 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:46.321 22:35:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.321 22:35:30 -- nvmf/common.sh@410 -- # return 0 00:08:46.321 22:35:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.321 22:35:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.321 22:35:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.321 22:35:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.321 22:35:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.321 22:35:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.321 22:35:30 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:46.321 22:35:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.321 22:35:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.321 22:35:30 -- common/autotest_common.sh@10 -- # set +x 00:08:46.321 22:35:30 -- nvmf/common.sh@469 -- # nvmfpid=944757 00:08:46.321 22:35:30 -- nvmf/common.sh@470 -- # waitforlisten 944757 00:08:46.321 22:35:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.321 22:35:30 -- common/autotest_common.sh@819 -- # '[' -z 944757 ']' 00:08:46.321 22:35:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.321 22:35:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.321 22:35:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.321 22:35:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.321 22:35:30 -- common/autotest_common.sh@10 -- # set +x 00:08:46.321 [2024-04-15 22:35:30.536603] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
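nvmfappstart above launches nvmf_tgt inside the target namespace with all trace groups enabled and a 4-core mask, then waitforlisten blocks until the RPC socket answers before any rpc_cmd is issued. A reduced sketch of that start-up handshake; the socket path is the default /var/tmp/spdk.sock from the log, the paths are shortened, and the polling loop is simplified compared to autotest_common.sh:

  # Reduced start-up sketch; the real waitforlisten adds retries and timeouts.
  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!          # PID of the netns wrapper; the harness resolves the real target PID
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5         # wait until the app opens its RPC socket
  done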
00:08:46.321 [2024-04-15 22:35:30.536667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.321 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.321 [2024-04-15 22:35:30.615932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.321 [2024-04-15 22:35:30.689781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.321 [2024-04-15 22:35:30.689914] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.321 [2024-04-15 22:35:30.689924] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.321 [2024-04-15 22:35:30.689932] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.321 [2024-04-15 22:35:30.690055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.321 [2024-04-15 22:35:30.690178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.321 [2024-04-15 22:35:30.690339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.321 [2024-04-15 22:35:30.690340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.582 22:35:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.582 22:35:31 -- common/autotest_common.sh@852 -- # return 0 00:08:46.582 22:35:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.582 22:35:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:46.582 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.582 22:35:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.582 22:35:31 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.582 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.582 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.582 [2024-04-15 22:35:31.362637] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.582 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.582 22:35:31 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:46.582 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.582 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.582 [2024-04-15 22:35:31.378791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:46.582 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.582 22:35:31 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:46.582 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.582 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.842 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.842 22:35:31 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:46.842 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.842 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.842 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.842 22:35:31 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:46.842 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.842 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.842 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.842 22:35:31 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.842 22:35:31 -- target/referrals.sh@48 -- # jq length 00:08:46.842 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.842 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.842 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.842 22:35:31 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:46.842 22:35:31 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:46.842 22:35:31 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:46.842 22:35:31 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.842 22:35:31 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:46.842 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.842 22:35:31 -- target/referrals.sh@21 -- # sort 00:08:46.842 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.842 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.842 22:35:31 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:46.842 22:35:31 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:46.842 22:35:31 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:46.842 22:35:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.842 22:35:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.842 22:35:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:46.842 22:35:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.842 22:35:31 -- target/referrals.sh@26 -- # sort 00:08:47.103 22:35:31 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:47.103 22:35:31 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:47.103 22:35:31 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:47.103 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.103 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.103 22:35:31 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:47.103 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.103 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.103 22:35:31 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:47.103 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.103 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.103 22:35:31 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.103 22:35:31 -- target/referrals.sh@56 -- # jq length 00:08:47.103 22:35:31 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.103 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.103 22:35:31 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:47.103 22:35:31 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:47.103 22:35:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.103 22:35:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.103 22:35:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.103 22:35:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.103 22:35:31 -- target/referrals.sh@26 -- # sort 00:08:47.103 22:35:31 -- target/referrals.sh@26 -- # echo 00:08:47.103 22:35:31 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:47.103 22:35:31 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:47.103 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.103 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.103 22:35:31 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:47.103 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.103 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.103 22:35:31 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:47.103 22:35:31 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.103 22:35:31 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.103 22:35:31 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.103 22:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.103 22:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:47.103 22:35:31 -- target/referrals.sh@21 -- # sort 00:08:47.103 22:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.103 22:35:31 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:47.364 22:35:31 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.364 22:35:31 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:47.364 22:35:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.364 22:35:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.364 22:35:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.364 22:35:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.364 22:35:31 -- target/referrals.sh@26 -- # sort 00:08:47.364 22:35:32 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:47.364 22:35:32 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.364 22:35:32 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:47.364 22:35:32 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:47.364 22:35:32 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.364 22:35:32 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.364 22:35:32 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.625 22:35:32 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:47.625 22:35:32 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.625 22:35:32 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.625 22:35:32 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:47.625 22:35:32 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.625 22:35:32 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:47.625 22:35:32 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:47.625 22:35:32 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:47.625 22:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.625 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:47.625 22:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.625 22:35:32 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:47.625 22:35:32 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.625 22:35:32 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.625 22:35:32 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.625 22:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.625 22:35:32 -- target/referrals.sh@21 -- # sort 00:08:47.625 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:47.625 22:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.625 22:35:32 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:47.625 22:35:32 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.625 22:35:32 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:47.625 22:35:32 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.625 22:35:32 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.625 22:35:32 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.625 22:35:32 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.625 22:35:32 -- target/referrals.sh@26 -- # sort 00:08:47.886 22:35:32 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:47.886 22:35:32 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.886 22:35:32 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:47.886 22:35:32 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:47.886 22:35:32 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.886 22:35:32 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.886 22:35:32 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.886 22:35:32 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:47.886 22:35:32 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.886 22:35:32 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:47.886 22:35:32 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.886 22:35:32 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.886 22:35:32 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:48.147 22:35:32 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:48.147 22:35:32 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:48.147 22:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.147 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:48.147 22:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.147 22:35:32 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:48.147 22:35:32 -- target/referrals.sh@82 -- # jq length 00:08:48.147 22:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.147 22:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:48.147 22:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.147 22:35:32 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:48.147 22:35:32 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:48.147 22:35:32 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:48.147 22:35:32 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:48.147 22:35:32 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:48.147 22:35:32 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.147 22:35:32 -- target/referrals.sh@26 -- # sort 00:08:48.147 22:35:32 -- target/referrals.sh@26 -- # echo 00:08:48.148 22:35:32 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:48.148 22:35:32 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:48.148 22:35:32 -- target/referrals.sh@86 -- # nvmftestfini 00:08:48.148 22:35:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:48.148 22:35:32 -- nvmf/common.sh@116 -- # sync 00:08:48.148 22:35:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:48.148 22:35:32 -- nvmf/common.sh@119 -- # set +e 00:08:48.148 22:35:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:48.148 22:35:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:48.148 rmmod nvme_tcp 00:08:48.148 rmmod nvme_fabrics 00:08:48.148 rmmod nvme_keyring 00:08:48.410 22:35:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:48.410 22:35:32 -- nvmf/common.sh@123 -- # set -e 00:08:48.410 22:35:32 -- nvmf/common.sh@124 -- # return 0 00:08:48.410 22:35:32 -- nvmf/common.sh@477 
-- # '[' -n 944757 ']' 00:08:48.410 22:35:32 -- nvmf/common.sh@478 -- # killprocess 944757 00:08:48.410 22:35:32 -- common/autotest_common.sh@926 -- # '[' -z 944757 ']' 00:08:48.410 22:35:32 -- common/autotest_common.sh@930 -- # kill -0 944757 00:08:48.410 22:35:32 -- common/autotest_common.sh@931 -- # uname 00:08:48.410 22:35:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:48.410 22:35:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 944757 00:08:48.410 22:35:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:48.410 22:35:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:48.410 22:35:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 944757' 00:08:48.410 killing process with pid 944757 00:08:48.410 22:35:33 -- common/autotest_common.sh@945 -- # kill 944757 00:08:48.410 22:35:33 -- common/autotest_common.sh@950 -- # wait 944757 00:08:48.410 22:35:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.410 22:35:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:48.410 22:35:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:48.410 22:35:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.410 22:35:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:48.410 22:35:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.410 22:35:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.410 22:35:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.023 22:35:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:51.023 00:08:51.023 real 0m12.774s 00:08:51.023 user 0m13.020s 00:08:51.023 sys 0m6.512s 00:08:51.023 22:35:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.023 22:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:51.023 ************************************ 00:08:51.023 END TEST nvmf_referrals 00:08:51.023 ************************************ 00:08:51.023 22:35:35 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:51.023 22:35:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:51.023 22:35:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:51.023 22:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:51.023 ************************************ 00:08:51.023 START TEST nvmf_connect_disconnect 00:08:51.023 ************************************ 00:08:51.023 22:35:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:51.023 * Looking for test storage... 
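Before the connect/disconnect output starts, it helps to condense what the nvmf_referrals test above exercised. Stripped of the xtrace plumbing it is roughly the flow below, with rpc.py standing in for the test's rpc_cmd wrapper and the addresses taken from this run:

    # discovery listener plus three referrals, then verify the count and the addresses
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    rpc.py nvmf_discovery_get_referrals | jq length          # expects 3
    # the same referrals must be visible to a host doing discovery against port 8009
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # remove them again and confirm the list is empty from both sides
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
    rpc.py nvmf_discovery_get_referrals | jq length          # expects 0

The second half of the test repeats the cycle with -n discovery and -n nqn.2016-06.io.spdk:cnode1 to check that subsystem-qualified referrals are reported with the expected subnqn.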
00:08:51.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.023 22:35:35 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.023 22:35:35 -- nvmf/common.sh@7 -- # uname -s 00:08:51.023 22:35:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.023 22:35:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.023 22:35:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.023 22:35:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.023 22:35:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.023 22:35:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.023 22:35:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.023 22:35:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.023 22:35:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.023 22:35:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.023 22:35:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:51.023 22:35:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:51.023 22:35:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.023 22:35:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.023 22:35:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.023 22:35:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.023 22:35:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.023 22:35:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.023 22:35:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.023 22:35:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.023 22:35:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.023 22:35:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.023 22:35:35 -- paths/export.sh@5 -- # export PATH 00:08:51.023 22:35:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.023 22:35:35 -- nvmf/common.sh@46 -- # : 0 00:08:51.023 22:35:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:51.023 22:35:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:51.023 22:35:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:51.023 22:35:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.023 22:35:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.023 22:35:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:51.023 22:35:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:51.023 22:35:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:51.023 22:35:35 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.023 22:35:35 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.023 22:35:35 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:51.023 22:35:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:51.023 22:35:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.023 22:35:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:51.023 22:35:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:51.023 22:35:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:51.023 22:35:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.023 22:35:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.023 22:35:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.023 22:35:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:51.023 22:35:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:51.023 22:35:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:51.023 22:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:59.175 22:35:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:59.175 22:35:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:59.175 22:35:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:59.175 22:35:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:59.175 22:35:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:59.175 22:35:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:59.175 22:35:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:59.175 22:35:42 -- nvmf/common.sh@294 -- # net_devs=() 00:08:59.175 22:35:42 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:59.175 22:35:42 -- nvmf/common.sh@295 -- # e810=() 00:08:59.175 22:35:42 -- nvmf/common.sh@295 -- # local -ga e810 00:08:59.175 22:35:42 -- nvmf/common.sh@296 -- # x722=() 00:08:59.175 22:35:42 -- nvmf/common.sh@296 -- # local -ga x722 00:08:59.175 22:35:42 -- nvmf/common.sh@297 -- # mlx=() 00:08:59.175 22:35:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:59.175 22:35:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.175 22:35:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.175 22:35:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:59.175 22:35:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:59.175 22:35:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:59.175 22:35:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:59.175 22:35:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:59.176 22:35:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:59.176 22:35:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:59.176 22:35:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:59.176 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:59.176 22:35:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:59.176 22:35:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:59.176 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:59.176 22:35:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:59.176 22:35:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:59.176 22:35:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.176 22:35:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:59.176 22:35:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.176 22:35:43 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:59.176 Found net devices under 0000:31:00.0: cvl_0_0 00:08:59.176 22:35:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.176 22:35:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:59.176 22:35:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.176 22:35:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:59.176 22:35:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.176 22:35:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:59.176 Found net devices under 0000:31:00.1: cvl_0_1 00:08:59.176 22:35:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.176 22:35:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:59.176 22:35:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:59.176 22:35:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:59.176 22:35:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.176 22:35:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.176 22:35:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.176 22:35:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:59.176 22:35:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.176 22:35:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.176 22:35:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:59.176 22:35:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.176 22:35:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.176 22:35:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:59.176 22:35:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:59.176 22:35:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.176 22:35:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.176 22:35:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.176 22:35:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.176 22:35:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:59.176 22:35:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.176 22:35:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.176 22:35:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.176 22:35:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:59.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:08:59.176 00:08:59.176 --- 10.0.0.2 ping statistics --- 00:08:59.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.176 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:08:59.176 22:35:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:08:59.176 00:08:59.176 --- 10.0.0.1 ping statistics --- 00:08:59.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.176 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:08:59.176 22:35:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.176 22:35:43 -- nvmf/common.sh@410 -- # return 0 00:08:59.176 22:35:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:59.176 22:35:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.176 22:35:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:59.176 22:35:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.176 22:35:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:59.176 22:35:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:59.176 22:35:43 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:59.176 22:35:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:59.176 22:35:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:59.176 22:35:43 -- common/autotest_common.sh@10 -- # set +x 00:08:59.176 22:35:43 -- nvmf/common.sh@469 -- # nvmfpid=950078 00:08:59.176 22:35:43 -- nvmf/common.sh@470 -- # waitforlisten 950078 00:08:59.176 22:35:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.176 22:35:43 -- common/autotest_common.sh@819 -- # '[' -z 950078 ']' 00:08:59.176 22:35:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.176 22:35:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:59.176 22:35:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.176 22:35:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:59.176 22:35:43 -- common/autotest_common.sh@10 -- # set +x 00:08:59.176 [2024-04-15 22:35:43.443249] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:59.176 [2024-04-15 22:35:43.443312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.176 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.176 [2024-04-15 22:35:43.521420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.176 [2024-04-15 22:35:43.593721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.176 [2024-04-15 22:35:43.593860] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.176 [2024-04-15 22:35:43.593870] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.176 [2024-04-15 22:35:43.593878] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:59.176 [2024-04-15 22:35:43.594028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.176 [2024-04-15 22:35:43.594148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.176 [2024-04-15 22:35:43.594312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.176 [2024-04-15 22:35:43.594313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.437 22:35:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:59.437 22:35:44 -- common/autotest_common.sh@852 -- # return 0 00:08:59.437 22:35:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:59.437 22:35:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:59.437 22:35:44 -- common/autotest_common.sh@10 -- # set +x 00:08:59.698 22:35:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:59.698 22:35:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.698 22:35:44 -- common/autotest_common.sh@10 -- # set +x 00:08:59.698 [2024-04-15 22:35:44.258652] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.698 22:35:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:59.698 22:35:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.698 22:35:44 -- common/autotest_common.sh@10 -- # set +x 00:08:59.698 22:35:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.698 22:35:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.698 22:35:44 -- common/autotest_common.sh@10 -- # set +x 00:08:59.698 22:35:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.698 22:35:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.698 22:35:44 -- common/autotest_common.sh@10 -- # set +x 00:08:59.698 22:35:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.698 22:35:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.698 22:35:44 -- common/autotest_common.sh@10 -- # set +x 00:08:59.698 [2024-04-15 22:35:44.317999] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.698 22:35:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:59.698 22:35:44 -- target/connect_disconnect.sh@34 -- # set +x 00:09:02.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:11.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.934 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:04.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.449 22:39:35 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
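The long run of 'disconnected 1 controller(s)' notices above is the only visible output of the 100 connect/disconnect iterations. Reconstructed from the setup traced earlier (64 MB malloc bdev, subsystem cnode1, listener on 10.0.0.2:4420, NVME_CONNECT='nvme connect -i 8'), the loop is approximately the following; the host NQN/ID come from the NVME_HOST variables defined in nvmf/common.sh:

    # target setup, done once before the loop
    rpc.py bdev_malloc_create 64 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 100 connect/disconnect cycles; the real script also waits for the block device
    # to appear between the connect and the disconnect
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done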
00:12:51.449 22:39:35 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:51.449 22:39:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:51.449 22:39:35 -- nvmf/common.sh@116 -- # sync 00:12:51.449 22:39:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:51.449 22:39:35 -- nvmf/common.sh@119 -- # set +e 00:12:51.449 22:39:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:51.449 22:39:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:51.449 rmmod nvme_tcp 00:12:51.449 rmmod nvme_fabrics 00:12:51.449 rmmod nvme_keyring 00:12:51.449 22:39:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:51.449 22:39:35 -- nvmf/common.sh@123 -- # set -e 00:12:51.449 22:39:35 -- nvmf/common.sh@124 -- # return 0 00:12:51.449 22:39:35 -- nvmf/common.sh@477 -- # '[' -n 950078 ']' 00:12:51.449 22:39:35 -- nvmf/common.sh@478 -- # killprocess 950078 00:12:51.449 22:39:35 -- common/autotest_common.sh@926 -- # '[' -z 950078 ']' 00:12:51.449 22:39:35 -- common/autotest_common.sh@930 -- # kill -0 950078 00:12:51.449 22:39:35 -- common/autotest_common.sh@931 -- # uname 00:12:51.449 22:39:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:51.449 22:39:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 950078 00:12:51.449 22:39:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:51.449 22:39:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:51.449 22:39:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 950078' 00:12:51.449 killing process with pid 950078 00:12:51.449 22:39:35 -- common/autotest_common.sh@945 -- # kill 950078 00:12:51.449 22:39:35 -- common/autotest_common.sh@950 -- # wait 950078 00:12:51.449 22:39:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:51.449 22:39:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:51.449 22:39:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:51.449 22:39:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.449 22:39:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:51.449 22:39:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.449 22:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.449 22:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.361 22:39:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:53.361 00:12:53.361 real 4m2.875s 00:12:53.361 user 15m23.995s 00:12:53.361 sys 0m22.775s 00:12:53.361 22:39:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.361 22:39:38 -- common/autotest_common.sh@10 -- # set +x 00:12:53.361 ************************************ 00:12:53.361 END TEST nvmf_connect_disconnect 00:12:53.361 ************************************ 00:12:53.621 22:39:38 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:53.621 22:39:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:53.621 22:39:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:53.621 22:39:38 -- common/autotest_common.sh@10 -- # set +x 00:12:53.621 ************************************ 00:12:53.621 START TEST nvmf_multitarget 00:12:53.621 ************************************ 00:12:53.621 22:39:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:53.621 * Looking for test storage... 
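nvmftestfini, which produced the rmmod and 'killing process' lines just above, undoes the whole setup. Condensed, with the target pid written symbolically:

    # host side: unload the initiator modules (nvme_fabrics and nvme_keyring come out with them)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # target side: stop nvmf_tgt and reap it
    kill "$nvmfpid" && wait "$nvmfpid"
    # network side: drop the test addressing; remove_spdk_ns (not expanded in this trace)
    # also deletes the cvl_0_0_ns_spdk namespace created during init
    ip -4 addr flush cvl_0_1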
00:12:53.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.621 22:39:38 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.621 22:39:38 -- nvmf/common.sh@7 -- # uname -s 00:12:53.621 22:39:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.621 22:39:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.621 22:39:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.621 22:39:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.621 22:39:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.621 22:39:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.621 22:39:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.621 22:39:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.621 22:39:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.621 22:39:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.622 22:39:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:53.622 22:39:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:53.622 22:39:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.622 22:39:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.622 22:39:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.622 22:39:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.622 22:39:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.622 22:39:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.622 22:39:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.622 22:39:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.622 22:39:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.622 22:39:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.622 22:39:38 -- paths/export.sh@5 -- # export PATH 00:12:53.622 22:39:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.622 22:39:38 -- nvmf/common.sh@46 -- # : 0 00:12:53.622 22:39:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:53.622 22:39:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:53.622 22:39:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:53.622 22:39:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.622 22:39:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.622 22:39:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:53.622 22:39:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:53.622 22:39:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:53.622 22:39:38 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:53.622 22:39:38 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:53.622 22:39:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:53.622 22:39:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.622 22:39:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:53.622 22:39:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:53.622 22:39:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:53.622 22:39:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.622 22:39:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.622 22:39:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.622 22:39:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:53.622 22:39:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:53.622 22:39:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:53.622 22:39:38 -- common/autotest_common.sh@10 -- # set +x 00:13:01.762 22:39:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:01.762 22:39:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:01.762 22:39:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:01.762 22:39:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:01.762 22:39:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:01.762 22:39:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:01.762 22:39:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:01.762 22:39:45 -- nvmf/common.sh@294 -- # net_devs=() 00:13:01.762 22:39:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:01.762 22:39:45 -- 
nvmf/common.sh@295 -- # e810=() 00:13:01.762 22:39:45 -- nvmf/common.sh@295 -- # local -ga e810 00:13:01.762 22:39:45 -- nvmf/common.sh@296 -- # x722=() 00:13:01.762 22:39:45 -- nvmf/common.sh@296 -- # local -ga x722 00:13:01.762 22:39:45 -- nvmf/common.sh@297 -- # mlx=() 00:13:01.762 22:39:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:01.762 22:39:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.762 22:39:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:01.762 22:39:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:01.762 22:39:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:01.762 22:39:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:01.762 22:39:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:01.762 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:01.762 22:39:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:01.762 22:39:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:01.762 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:01.762 22:39:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:01.762 22:39:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:01.762 22:39:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.762 22:39:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:01.762 22:39:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.762 22:39:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:13:01.762 Found net devices under 0000:31:00.0: cvl_0_0 00:13:01.762 22:39:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.762 22:39:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:01.762 22:39:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.762 22:39:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:01.762 22:39:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.762 22:39:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:01.762 Found net devices under 0000:31:00.1: cvl_0_1 00:13:01.762 22:39:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.762 22:39:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:01.762 22:39:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:01.762 22:39:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:01.762 22:39:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:01.762 22:39:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.762 22:39:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.762 22:39:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.762 22:39:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:01.762 22:39:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.762 22:39:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.762 22:39:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:01.762 22:39:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.763 22:39:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.763 22:39:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:01.763 22:39:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:01.763 22:39:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.763 22:39:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.763 22:39:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.763 22:39:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.763 22:39:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:01.763 22:39:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.763 22:39:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.763 22:39:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.763 22:39:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:01.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:13:01.763 00:13:01.763 --- 10.0.0.2 ping statistics --- 00:13:01.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.763 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:13:01.763 22:39:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:13:01.763 00:13:01.763 --- 10.0.0.1 ping statistics --- 00:13:01.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.763 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:13:01.763 22:39:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.763 22:39:45 -- nvmf/common.sh@410 -- # return 0 00:13:01.763 22:39:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:01.763 22:39:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.763 22:39:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:01.763 22:39:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:01.763 22:39:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.763 22:39:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:01.763 22:39:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:01.763 22:39:45 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:01.763 22:39:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:01.763 22:39:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:01.763 22:39:45 -- common/autotest_common.sh@10 -- # set +x 00:13:01.763 22:39:45 -- nvmf/common.sh@469 -- # nvmfpid=1003294 00:13:01.763 22:39:45 -- nvmf/common.sh@470 -- # waitforlisten 1003294 00:13:01.763 22:39:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.763 22:39:45 -- common/autotest_common.sh@819 -- # '[' -z 1003294 ']' 00:13:01.763 22:39:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.763 22:39:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:01.763 22:39:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.763 22:39:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:01.763 22:39:45 -- common/autotest_common.sh@10 -- # set +x 00:13:01.763 [2024-04-15 22:39:45.999435] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:13:01.763 [2024-04-15 22:39:45.999489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.763 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.763 [2024-04-15 22:39:46.072858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.763 [2024-04-15 22:39:46.136365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:01.763 [2024-04-15 22:39:46.136498] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.763 [2024-04-15 22:39:46.136508] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.763 [2024-04-15 22:39:46.136516] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
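At this point nvmf_tcp_init from test/nvmf/common.sh has already built the test topology: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target side, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms reachability before nvmf_tgt is launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF). A minimal stand-alone sketch of the same setup, assuming interfaces named cvl_0_0/cvl_0_1 as in this run:
    # target-side port goes into its own namespace; initiator side stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in, then verify the loopback path both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1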
00:13:01.763 [2024-04-15 22:39:46.136566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.763 [2024-04-15 22:39:46.136724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.763 [2024-04-15 22:39:46.136725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.763 [2024-04-15 22:39:46.136587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.022 22:39:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:02.022 22:39:46 -- common/autotest_common.sh@852 -- # return 0 00:13:02.022 22:39:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:02.022 22:39:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:02.022 22:39:46 -- common/autotest_common.sh@10 -- # set +x 00:13:02.022 22:39:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.022 22:39:46 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:02.022 22:39:46 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:02.022 22:39:46 -- target/multitarget.sh@21 -- # jq length 00:13:02.282 22:39:46 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:02.282 22:39:46 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:02.282 "nvmf_tgt_1" 00:13:02.282 22:39:47 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:02.542 "nvmf_tgt_2" 00:13:02.542 22:39:47 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:02.542 22:39:47 -- target/multitarget.sh@28 -- # jq length 00:13:02.542 22:39:47 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:02.542 22:39:47 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:02.542 true 00:13:02.542 22:39:47 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:02.802 true 00:13:02.802 22:39:47 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:02.802 22:39:47 -- target/multitarget.sh@35 -- # jq length 00:13:02.802 22:39:47 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:02.802 22:39:47 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:02.802 22:39:47 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:02.802 22:39:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:02.802 22:39:47 -- nvmf/common.sh@116 -- # sync 00:13:02.802 22:39:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:02.802 22:39:47 -- nvmf/common.sh@119 -- # set +e 00:13:02.802 22:39:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:02.802 22:39:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:02.802 rmmod nvme_tcp 00:13:02.802 rmmod nvme_fabrics 00:13:02.802 rmmod nvme_keyring 00:13:02.802 22:39:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:02.802 22:39:47 -- nvmf/common.sh@123 -- # set -e 00:13:02.802 22:39:47 -- nvmf/common.sh@124 -- # return 0 
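The trace here and in the lines that follow is the heart of the multitarget test: it drives test/nvmf/target/multitarget_rpc.py to count the default targets with nvmf_get_targets | jq length, adds nvmf_tgt_1 and nvmf_tgt_2 with nvmf_create_target -n <name> -s 32, checks the count reaches 3, then deletes both and checks it falls back to 1. A condensed sketch of that sequence, assuming the target is already up and listening on /var/tmp/spdk.sock ($rpc is shorthand for the full multitarget_rpc.py path used above):
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only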
00:13:02.802 22:39:47 -- nvmf/common.sh@477 -- # '[' -n 1003294 ']' 00:13:02.802 22:39:47 -- nvmf/common.sh@478 -- # killprocess 1003294 00:13:02.802 22:39:47 -- common/autotest_common.sh@926 -- # '[' -z 1003294 ']' 00:13:02.802 22:39:47 -- common/autotest_common.sh@930 -- # kill -0 1003294 00:13:02.802 22:39:47 -- common/autotest_common.sh@931 -- # uname 00:13:02.802 22:39:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:02.802 22:39:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1003294 00:13:03.062 22:39:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:03.062 22:39:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:03.062 22:39:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1003294' 00:13:03.062 killing process with pid 1003294 00:13:03.062 22:39:47 -- common/autotest_common.sh@945 -- # kill 1003294 00:13:03.062 22:39:47 -- common/autotest_common.sh@950 -- # wait 1003294 00:13:03.062 22:39:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:03.062 22:39:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:03.062 22:39:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:03.062 22:39:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.062 22:39:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:03.062 22:39:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.062 22:39:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.062 22:39:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.605 22:39:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:05.605 00:13:05.605 real 0m11.644s 00:13:05.605 user 0m9.460s 00:13:05.605 sys 0m5.977s 00:13:05.605 22:39:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.605 22:39:49 -- common/autotest_common.sh@10 -- # set +x 00:13:05.605 ************************************ 00:13:05.605 END TEST nvmf_multitarget 00:13:05.605 ************************************ 00:13:05.605 22:39:49 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:05.605 22:39:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:05.605 22:39:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.605 22:39:49 -- common/autotest_common.sh@10 -- # set +x 00:13:05.605 ************************************ 00:13:05.605 START TEST nvmf_rpc 00:13:05.605 ************************************ 00:13:05.605 22:39:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:05.605 * Looking for test storage... 
00:13:05.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.605 22:39:49 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.605 22:39:49 -- nvmf/common.sh@7 -- # uname -s 00:13:05.605 22:39:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.605 22:39:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.605 22:39:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.605 22:39:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.605 22:39:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.605 22:39:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.605 22:39:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.605 22:39:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.605 22:39:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.605 22:39:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.605 22:39:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:05.605 22:39:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:05.605 22:39:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.605 22:39:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.605 22:39:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.605 22:39:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.605 22:39:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.605 22:39:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.605 22:39:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.606 22:39:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.606 22:39:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.606 22:39:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.606 22:39:50 -- paths/export.sh@5 -- # export PATH 00:13:05.606 22:39:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.606 22:39:50 -- nvmf/common.sh@46 -- # : 0 00:13:05.606 22:39:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:05.606 22:39:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:05.606 22:39:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:05.606 22:39:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.606 22:39:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.606 22:39:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:05.606 22:39:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:05.606 22:39:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:05.606 22:39:50 -- target/rpc.sh@11 -- # loops=5 00:13:05.606 22:39:50 -- target/rpc.sh@23 -- # nvmftestinit 00:13:05.606 22:39:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:05.606 22:39:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.606 22:39:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:05.606 22:39:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:05.606 22:39:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:05.606 22:39:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.606 22:39:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.606 22:39:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.606 22:39:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:05.606 22:39:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:05.606 22:39:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:05.606 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:13:13.835 22:39:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:13.835 22:39:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:13.835 22:39:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:13.835 22:39:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:13.835 22:39:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:13.835 22:39:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:13.835 22:39:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:13.835 22:39:57 -- nvmf/common.sh@294 -- # net_devs=() 00:13:13.835 22:39:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:13.835 22:39:57 -- nvmf/common.sh@295 -- # e810=() 00:13:13.835 22:39:57 -- nvmf/common.sh@295 -- # local -ga e810 00:13:13.835 
22:39:57 -- nvmf/common.sh@296 -- # x722=() 00:13:13.835 22:39:57 -- nvmf/common.sh@296 -- # local -ga x722 00:13:13.835 22:39:57 -- nvmf/common.sh@297 -- # mlx=() 00:13:13.835 22:39:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:13.835 22:39:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.835 22:39:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:13.835 22:39:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:13.835 22:39:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:13.835 22:39:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:13.835 22:39:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:13.835 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:13.835 22:39:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:13.835 22:39:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:13.835 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:13.835 22:39:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:13.835 22:39:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:13.835 22:39:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.835 22:39:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:13.835 22:39:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.835 22:39:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:13.835 Found net devices under 0000:31:00.0: cvl_0_0 00:13:13.835 22:39:57 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:13.835 22:39:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:13.835 22:39:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.835 22:39:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:13.835 22:39:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.835 22:39:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:13.835 Found net devices under 0000:31:00.1: cvl_0_1 00:13:13.835 22:39:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.835 22:39:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:13.835 22:39:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:13.835 22:39:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:13.835 22:39:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.835 22:39:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.835 22:39:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.835 22:39:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:13.835 22:39:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.835 22:39:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.835 22:39:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:13.835 22:39:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.835 22:39:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.835 22:39:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:13.835 22:39:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:13.835 22:39:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.835 22:39:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.835 22:39:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.835 22:39:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.835 22:39:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:13.835 22:39:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.835 22:39:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.835 22:39:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.835 22:39:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:13.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:13:13.835 00:13:13.835 --- 10.0.0.2 ping statistics --- 00:13:13.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.835 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:13:13.835 22:39:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:13:13.835 00:13:13.835 --- 10.0.0.1 ping statistics --- 00:13:13.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.835 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:13:13.835 22:39:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.835 22:39:57 -- nvmf/common.sh@410 -- # return 0 00:13:13.835 22:39:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:13.835 22:39:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.835 22:39:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:13.835 22:39:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.835 22:39:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:13.835 22:39:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:13.835 22:39:57 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:13.835 22:39:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:13.835 22:39:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:13.835 22:39:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.835 22:39:57 -- nvmf/common.sh@469 -- # nvmfpid=1008325 00:13:13.835 22:39:57 -- nvmf/common.sh@470 -- # waitforlisten 1008325 00:13:13.835 22:39:57 -- common/autotest_common.sh@819 -- # '[' -z 1008325 ']' 00:13:13.835 22:39:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.835 22:39:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:13.835 22:39:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.835 22:39:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:13.835 22:39:57 -- common/autotest_common.sh@10 -- # set +x 00:13:13.835 22:39:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.835 [2024-04-15 22:39:57.802905] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:13:13.836 [2024-04-15 22:39:57.802998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.836 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.836 [2024-04-15 22:39:57.884960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.836 [2024-04-15 22:39:57.957727] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:13.836 [2024-04-15 22:39:57.957866] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.836 [2024-04-15 22:39:57.957875] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.836 [2024-04-15 22:39:57.957883] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
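From here rpc.sh provisions a full TCP subsystem and then exercises the host allow-list: a TCP transport and a 64 MB malloc bdev (512-byte blocks) are created, nqn.2016-06.io.spdk:cnode1 gets the bdev as a namespace and a listener on 10.0.0.2:4420, allow_any_host is switched off so the first nvme connect from this host NQN is rejected ("does not allow host"), and only after nvmf_subsystem_add_host does the connect succeed. A condensed sketch of that flow, flags taken verbatim from the trace (rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client; the NOT/waitforserial plumbing of the harness is omitted):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # require an explicit host allow-list
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                 # rejected: host not on the allow-list
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                 # now succeeds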
00:13:13.836 [2024-04-15 22:39:57.958038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.836 [2024-04-15 22:39:57.958161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.836 [2024-04-15 22:39:57.958320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.836 [2024-04-15 22:39:57.958321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.836 22:39:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:13.836 22:39:58 -- common/autotest_common.sh@852 -- # return 0 00:13:13.836 22:39:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:13.836 22:39:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:13.836 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:13.836 22:39:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.836 22:39:58 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:13.836 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.836 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:13.836 22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.836 22:39:58 -- target/rpc.sh@26 -- # stats='{ 00:13:13.836 "tick_rate": 2400000000, 00:13:13.836 "poll_groups": [ 00:13:13.836 { 00:13:13.836 "name": "nvmf_tgt_poll_group_0", 00:13:13.836 "admin_qpairs": 0, 00:13:13.836 "io_qpairs": 0, 00:13:13.836 "current_admin_qpairs": 0, 00:13:13.836 "current_io_qpairs": 0, 00:13:13.836 "pending_bdev_io": 0, 00:13:13.836 "completed_nvme_io": 0, 00:13:13.836 "transports": [] 00:13:13.836 }, 00:13:13.836 { 00:13:13.836 "name": "nvmf_tgt_poll_group_1", 00:13:13.836 "admin_qpairs": 0, 00:13:13.836 "io_qpairs": 0, 00:13:13.836 "current_admin_qpairs": 0, 00:13:13.836 "current_io_qpairs": 0, 00:13:13.836 "pending_bdev_io": 0, 00:13:13.836 "completed_nvme_io": 0, 00:13:13.836 "transports": [] 00:13:13.836 }, 00:13:13.836 { 00:13:13.836 "name": "nvmf_tgt_poll_group_2", 00:13:13.836 "admin_qpairs": 0, 00:13:13.836 "io_qpairs": 0, 00:13:13.836 "current_admin_qpairs": 0, 00:13:13.836 "current_io_qpairs": 0, 00:13:13.836 "pending_bdev_io": 0, 00:13:13.836 "completed_nvme_io": 0, 00:13:13.836 "transports": [] 00:13:13.836 }, 00:13:13.836 { 00:13:13.836 "name": "nvmf_tgt_poll_group_3", 00:13:13.836 "admin_qpairs": 0, 00:13:13.836 "io_qpairs": 0, 00:13:13.836 "current_admin_qpairs": 0, 00:13:13.836 "current_io_qpairs": 0, 00:13:13.836 "pending_bdev_io": 0, 00:13:13.836 "completed_nvme_io": 0, 00:13:13.836 "transports": [] 00:13:13.836 } 00:13:13.836 ] 00:13:13.836 }' 00:13:13.836 22:39:58 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:13.836 22:39:58 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:14.097 22:39:58 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:14.097 22:39:58 -- target/rpc.sh@15 -- # wc -l 00:13:14.097 22:39:58 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:14.097 22:39:58 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:14.097 22:39:58 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:14.097 22:39:58 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.097 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.097 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.097 [2024-04-15 22:39:58.739132] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.097 22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.097 22:39:58 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:14.097 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.097 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.097 22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.097 22:39:58 -- target/rpc.sh@33 -- # stats='{ 00:13:14.097 "tick_rate": 2400000000, 00:13:14.097 "poll_groups": [ 00:13:14.097 { 00:13:14.097 "name": "nvmf_tgt_poll_group_0", 00:13:14.097 "admin_qpairs": 0, 00:13:14.097 "io_qpairs": 0, 00:13:14.097 "current_admin_qpairs": 0, 00:13:14.097 "current_io_qpairs": 0, 00:13:14.097 "pending_bdev_io": 0, 00:13:14.097 "completed_nvme_io": 0, 00:13:14.097 "transports": [ 00:13:14.097 { 00:13:14.097 "trtype": "TCP" 00:13:14.097 } 00:13:14.097 ] 00:13:14.097 }, 00:13:14.097 { 00:13:14.097 "name": "nvmf_tgt_poll_group_1", 00:13:14.097 "admin_qpairs": 0, 00:13:14.097 "io_qpairs": 0, 00:13:14.097 "current_admin_qpairs": 0, 00:13:14.097 "current_io_qpairs": 0, 00:13:14.097 "pending_bdev_io": 0, 00:13:14.097 "completed_nvme_io": 0, 00:13:14.097 "transports": [ 00:13:14.097 { 00:13:14.097 "trtype": "TCP" 00:13:14.097 } 00:13:14.097 ] 00:13:14.097 }, 00:13:14.097 { 00:13:14.097 "name": "nvmf_tgt_poll_group_2", 00:13:14.097 "admin_qpairs": 0, 00:13:14.097 "io_qpairs": 0, 00:13:14.097 "current_admin_qpairs": 0, 00:13:14.097 "current_io_qpairs": 0, 00:13:14.097 "pending_bdev_io": 0, 00:13:14.097 "completed_nvme_io": 0, 00:13:14.097 "transports": [ 00:13:14.097 { 00:13:14.097 "trtype": "TCP" 00:13:14.097 } 00:13:14.097 ] 00:13:14.097 }, 00:13:14.097 { 00:13:14.097 "name": "nvmf_tgt_poll_group_3", 00:13:14.097 "admin_qpairs": 0, 00:13:14.097 "io_qpairs": 0, 00:13:14.097 "current_admin_qpairs": 0, 00:13:14.097 "current_io_qpairs": 0, 00:13:14.097 "pending_bdev_io": 0, 00:13:14.097 "completed_nvme_io": 0, 00:13:14.097 "transports": [ 00:13:14.097 { 00:13:14.097 "trtype": "TCP" 00:13:14.097 } 00:13:14.097 ] 00:13:14.097 } 00:13:14.097 ] 00:13:14.097 }' 00:13:14.097 22:39:58 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:14.097 22:39:58 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:14.097 22:39:58 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:14.097 22:39:58 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:14.097 22:39:58 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:14.097 22:39:58 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:14.097 22:39:58 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:14.097 22:39:58 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:14.097 22:39:58 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:14.097 22:39:58 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:14.097 22:39:58 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:14.097 22:39:58 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:14.097 22:39:58 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:14.097 22:39:58 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:14.097 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.097 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.097 Malloc1 00:13:14.097 22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.097 22:39:58 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:14.097 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.097 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.097 
22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.097 22:39:58 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.097 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.097 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.097 22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.097 22:39:58 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:14.097 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.097 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.097 22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.097 22:39:58 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.097 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.097 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.097 [2024-04-15 22:39:58.902870] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.359 22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.359 22:39:58 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:14.359 22:39:58 -- common/autotest_common.sh@640 -- # local es=0 00:13:14.359 22:39:58 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:14.359 22:39:58 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:14.359 22:39:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.359 22:39:58 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:14.359 22:39:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.359 22:39:58 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:14.359 22:39:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.359 22:39:58 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:14.359 22:39:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:14.359 22:39:58 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:14.359 [2024-04-15 22:39:58.933610] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:14.359 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:14.359 could not add new controller: failed to write to nvme-fabrics device 00:13:14.359 22:39:58 -- common/autotest_common.sh@643 -- # es=1 00:13:14.359 22:39:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:14.359 22:39:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:14.359 22:39:58 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:14.359 22:39:58 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:14.359 22:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.359 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:14.359 22:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.359 22:39:58 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.746 22:40:00 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.747 22:40:00 -- common/autotest_common.sh@1177 -- # local i=0 00:13:15.747 22:40:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.747 22:40:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:15.747 22:40:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:18.297 22:40:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:18.297 22:40:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:18.297 22:40:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.297 22:40:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:18.297 22:40:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.297 22:40:02 -- common/autotest_common.sh@1187 -- # return 0 00:13:18.297 22:40:02 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.297 22:40:02 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.297 22:40:02 -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.297 22:40:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:18.297 22:40:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.297 22:40:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:18.297 22:40:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.297 22:40:02 -- common/autotest_common.sh@1210 -- # return 0 00:13:18.297 22:40:02 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:18.297 22:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.297 22:40:02 -- common/autotest_common.sh@10 -- # set +x 00:13:18.297 22:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.297 22:40:02 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.297 22:40:02 -- common/autotest_common.sh@640 -- # local es=0 00:13:18.297 22:40:02 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.297 22:40:02 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:18.297 22:40:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:18.297 22:40:02 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:18.297 22:40:02 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:18.297 22:40:02 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:18.297 22:40:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:18.297 22:40:02 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:18.297 22:40:02 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:18.297 22:40:02 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.297 [2024-04-15 22:40:02.690851] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:18.297 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:18.297 could not add new controller: failed to write to nvme-fabrics device 00:13:18.297 22:40:02 -- common/autotest_common.sh@643 -- # es=1 00:13:18.297 22:40:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:18.297 22:40:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:18.297 22:40:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:18.297 22:40:02 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:18.297 22:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.297 22:40:02 -- common/autotest_common.sh@10 -- # set +x 00:13:18.297 22:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.297 22:40:02 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.684 22:40:04 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.684 22:40:04 -- common/autotest_common.sh@1177 -- # local i=0 00:13:19.684 22:40:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.684 22:40:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:19.684 22:40:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:21.601 22:40:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:21.601 22:40:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:21.601 22:40:06 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.601 22:40:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:21.601 22:40:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.601 22:40:06 -- common/autotest_common.sh@1187 -- # return 0 00:13:21.601 22:40:06 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.601 22:40:06 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.601 22:40:06 -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.601 22:40:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:21.601 22:40:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.601 22:40:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:21.601 22:40:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.601 22:40:06 -- common/autotest_common.sh@1210 -- # return 0 00:13:21.601 22:40:06 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.601 22:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.601 22:40:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.862 22:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.862 22:40:06 -- target/rpc.sh@81 -- # seq 1 5 00:13:21.862 22:40:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.862 22:40:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.862 22:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.862 22:40:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.862 22:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.862 22:40:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.862 22:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.862 22:40:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.862 [2024-04-15 22:40:06.441901] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.862 22:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.862 22:40:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.862 22:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.862 22:40:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.862 22:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.862 22:40:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.862 22:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.862 22:40:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.862 22:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.862 22:40:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.245 22:40:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.245 22:40:08 -- common/autotest_common.sh@1177 -- # local i=0 00:13:23.245 22:40:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.245 22:40:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:23.245 22:40:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:25.791 22:40:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:25.791 22:40:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:25.791 22:40:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.791 22:40:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:25.791 22:40:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.791 22:40:10 -- common/autotest_common.sh@1187 -- # return 0 00:13:25.791 22:40:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.791 22:40:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.791 22:40:10 -- common/autotest_common.sh@1198 -- # local i=0 00:13:25.791 22:40:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:25.791 22:40:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
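The waitforserial / waitforserial_disconnect calls traced through this stretch are how the harness decides the namespace has actually appeared or disappeared on the initiator: they poll lsblk for the subsystem serial (SPDKISFASTANDAWESOME) for up to roughly 15 attempts, two seconds apart. Roughly, for the attach case (a sketch, not the exact helper from autotest_common.sh):
    # wait until the expected number of block devices report the serial
    i=0; nvme_device_counter=1; nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
        (( nvme_devices == nvme_device_counter )) && break
    done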
00:13:25.791 22:40:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:25.791 22:40:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.791 22:40:10 -- common/autotest_common.sh@1210 -- # return 0 00:13:25.791 22:40:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.791 22:40:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.791 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:13:25.791 22:40:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.791 22:40:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.791 22:40:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.791 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:13:25.791 22:40:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.791 22:40:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:25.791 22:40:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.791 22:40:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.791 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:13:25.791 22:40:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.791 22:40:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.791 22:40:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.791 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:13:25.791 [2024-04-15 22:40:10.199160] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.791 22:40:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.791 22:40:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:25.791 22:40:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.791 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:13:25.791 22:40:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.791 22:40:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.791 22:40:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.791 22:40:10 -- common/autotest_common.sh@10 -- # set +x 00:13:25.791 22:40:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.791 22:40:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.178 22:40:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.178 22:40:11 -- common/autotest_common.sh@1177 -- # local i=0 00:13:27.178 22:40:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.178 22:40:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:27.178 22:40:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:29.095 22:40:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:29.095 22:40:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:29.095 22:40:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.095 22:40:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:29.095 22:40:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.095 22:40:13 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:29.095 22:40:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.095 22:40:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.095 22:40:13 -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.095 22:40:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:29.095 22:40:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.095 22:40:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:29.095 22:40:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.095 22:40:13 -- common/autotest_common.sh@1210 -- # return 0 00:13:29.095 22:40:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.095 22:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.095 22:40:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.095 22:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.095 22:40:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.095 22:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.095 22:40:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.095 22:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.095 22:40:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:29.095 22:40:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.095 22:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.095 22:40:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.356 22:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.356 22:40:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.356 22:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.356 22:40:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.356 [2024-04-15 22:40:13.913672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.356 22:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.356 22:40:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:29.356 22:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.356 22:40:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.356 22:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.356 22:40:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.356 22:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.356 22:40:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.356 22:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.356 22:40:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.743 22:40:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.743 22:40:15 -- common/autotest_common.sh@1177 -- # local i=0 00:13:30.743 22:40:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.743 22:40:15 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:30.743 22:40:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:32.658 22:40:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:32.658 22:40:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:32.658 22:40:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.658 22:40:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:32.658 22:40:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.658 22:40:17 -- common/autotest_common.sh@1187 -- # return 0 00:13:32.658 22:40:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.919 22:40:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.919 22:40:17 -- common/autotest_common.sh@1198 -- # local i=0 00:13:32.919 22:40:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:32.919 22:40:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.919 22:40:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:32.919 22:40:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.919 22:40:17 -- common/autotest_common.sh@1210 -- # return 0 00:13:32.919 22:40:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.919 22:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.919 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 22:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.919 22:40:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.919 22:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.919 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 22:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.919 22:40:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:32.919 22:40:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.919 22:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.919 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 22:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.919 22:40:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.919 22:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.919 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 [2024-04-15 22:40:17.619525] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.919 22:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.919 22:40:17 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:32.919 22:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.919 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 22:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.919 22:40:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.919 22:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.919 22:40:17 -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 22:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.919 
22:40:17 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.835 22:40:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.835 22:40:19 -- common/autotest_common.sh@1177 -- # local i=0 00:13:34.835 22:40:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.835 22:40:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:34.835 22:40:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:36.757 22:40:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:36.757 22:40:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:36.757 22:40:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.757 22:40:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:36.757 22:40:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.757 22:40:21 -- common/autotest_common.sh@1187 -- # return 0 00:13:36.757 22:40:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.757 22:40:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.757 22:40:21 -- common/autotest_common.sh@1198 -- # local i=0 00:13:36.757 22:40:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:36.757 22:40:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.757 22:40:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:36.757 22:40:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.757 22:40:21 -- common/autotest_common.sh@1210 -- # return 0 00:13:36.757 22:40:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.757 22:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.757 22:40:21 -- common/autotest_common.sh@10 -- # set +x 00:13:36.757 22:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.757 22:40:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.757 22:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.757 22:40:21 -- common/autotest_common.sh@10 -- # set +x 00:13:36.757 22:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.757 22:40:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:36.757 22:40:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.757 22:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.757 22:40:21 -- common/autotest_common.sh@10 -- # set +x 00:13:36.757 22:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.757 22:40:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.757 22:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.757 22:40:21 -- common/autotest_common.sh@10 -- # set +x 00:13:36.757 [2024-04-15 22:40:21.377159] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.757 22:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.757 22:40:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:36.757 
22:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.757 22:40:21 -- common/autotest_common.sh@10 -- # set +x 00:13:36.757 22:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.757 22:40:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.757 22:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.757 22:40:21 -- common/autotest_common.sh@10 -- # set +x 00:13:36.757 22:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.757 22:40:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.199 22:40:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.199 22:40:22 -- common/autotest_common.sh@1177 -- # local i=0 00:13:38.199 22:40:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.199 22:40:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:38.199 22:40:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:40.116 22:40:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:40.116 22:40:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:40.116 22:40:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.116 22:40:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:40.116 22:40:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.116 22:40:24 -- common/autotest_common.sh@1187 -- # return 0 00:13:40.116 22:40:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.377 22:40:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.377 22:40:25 -- common/autotest_common.sh@1198 -- # local i=0 00:13:40.377 22:40:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:40.377 22:40:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.377 22:40:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:40.377 22:40:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.377 22:40:25 -- common/autotest_common.sh@1210 -- # return 0 00:13:40.377 22:40:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@99 -- # seq 1 5 00:13:40.377 22:40:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.377 22:40:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 [2024-04-15 22:40:25.089132] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.377 22:40:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 [2024-04-15 22:40:25.145264] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.377 22:40:25 -- 
common/autotest_common.sh@10 -- # set +x 00:13:40.377 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.377 22:40:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.377 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.378 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.639 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.639 22:40:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.639 22:40:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 [2024-04-15 22:40:25.205460] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.640 22:40:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 [2024-04-15 22:40:25.261627] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 
22:40:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:40.640 22:40:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.640 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.640 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.640 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.640 22:40:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.641 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.641 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.641 [2024-04-15 22:40:25.317840] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.641 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.641 22:40:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.641 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.641 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.641 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.641 22:40:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.641 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.641 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.641 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.641 22:40:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.641 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.641 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.641 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.641 22:40:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.641 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.641 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.641 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.641 22:40:25 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:40.641 22:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.641 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:40.641 22:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.641 22:40:25 -- target/rpc.sh@110 -- # stats='{ 00:13:40.641 "tick_rate": 2400000000, 00:13:40.641 "poll_groups": [ 00:13:40.641 { 00:13:40.641 "name": "nvmf_tgt_poll_group_0", 00:13:40.641 "admin_qpairs": 0, 00:13:40.641 "io_qpairs": 224, 00:13:40.641 "current_admin_qpairs": 0, 00:13:40.641 "current_io_qpairs": 0, 00:13:40.641 "pending_bdev_io": 0, 00:13:40.641 "completed_nvme_io": 357, 00:13:40.641 "transports": [ 00:13:40.641 { 00:13:40.641 "trtype": "TCP" 00:13:40.641 } 00:13:40.641 ] 00:13:40.641 }, 00:13:40.641 { 00:13:40.641 "name": "nvmf_tgt_poll_group_1", 00:13:40.641 "admin_qpairs": 1, 00:13:40.641 "io_qpairs": 223, 00:13:40.641 "current_admin_qpairs": 0, 00:13:40.641 "current_io_qpairs": 0, 00:13:40.641 "pending_bdev_io": 0, 00:13:40.641 "completed_nvme_io": 229, 00:13:40.641 "transports": [ 00:13:40.641 { 00:13:40.641 "trtype": "TCP" 00:13:40.641 } 00:13:40.641 ] 00:13:40.641 }, 00:13:40.641 { 00:13:40.641 "name": "nvmf_tgt_poll_group_2", 00:13:40.641 "admin_qpairs": 6, 00:13:40.641 "io_qpairs": 218, 00:13:40.641 "current_admin_qpairs": 0, 00:13:40.641 "current_io_qpairs": 0, 00:13:40.641 "pending_bdev_io": 0, 00:13:40.641 "completed_nvme_io": 247, 00:13:40.641 "transports": [ 00:13:40.641 { 00:13:40.641 "trtype": "TCP" 00:13:40.641 } 00:13:40.641 ] 00:13:40.641 }, 00:13:40.641 { 00:13:40.641 "name": "nvmf_tgt_poll_group_3", 00:13:40.641 "admin_qpairs": 0, 00:13:40.641 "io_qpairs": 224, 00:13:40.641 "current_admin_qpairs": 0, 00:13:40.641 "current_io_qpairs": 0, 00:13:40.641 "pending_bdev_io": 0, 00:13:40.641 "completed_nvme_io": 406, 00:13:40.641 "transports": [ 00:13:40.641 { 00:13:40.641 "trtype": "TCP" 00:13:40.641 } 00:13:40.641 ] 00:13:40.641 } 00:13:40.641 ] 00:13:40.641 }' 00:13:40.641 22:40:25 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:40.641 22:40:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:40.642 22:40:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:40.642 22:40:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:40.642 22:40:25 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:40.642 22:40:25 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:40.642 22:40:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:40.642 22:40:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:40.642 22:40:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:40.905 22:40:25 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:40.905 22:40:25 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:40.905 22:40:25 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:40.905 22:40:25 -- target/rpc.sh@123 -- # nvmftestfini 00:13:40.905 22:40:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:40.905 22:40:25 -- nvmf/common.sh@116 -- # sync 00:13:40.905 22:40:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:40.905 22:40:25 -- nvmf/common.sh@119 -- # set +e 00:13:40.905 22:40:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:40.905 22:40:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:40.905 rmmod nvme_tcp 00:13:40.905 rmmod nvme_fabrics 00:13:40.905 rmmod nvme_keyring 00:13:40.905 22:40:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:40.905 22:40:25 -- nvmf/common.sh@123 -- # set -e 00:13:40.905 22:40:25 -- 
nvmf/common.sh@124 -- # return 0 00:13:40.905 22:40:25 -- nvmf/common.sh@477 -- # '[' -n 1008325 ']' 00:13:40.905 22:40:25 -- nvmf/common.sh@478 -- # killprocess 1008325 00:13:40.905 22:40:25 -- common/autotest_common.sh@926 -- # '[' -z 1008325 ']' 00:13:40.905 22:40:25 -- common/autotest_common.sh@930 -- # kill -0 1008325 00:13:40.905 22:40:25 -- common/autotest_common.sh@931 -- # uname 00:13:40.905 22:40:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:40.905 22:40:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1008325 00:13:40.905 22:40:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:40.905 22:40:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:40.905 22:40:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1008325' 00:13:40.905 killing process with pid 1008325 00:13:40.905 22:40:25 -- common/autotest_common.sh@945 -- # kill 1008325 00:13:40.905 22:40:25 -- common/autotest_common.sh@950 -- # wait 1008325 00:13:41.166 22:40:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:41.166 22:40:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:41.166 22:40:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:41.166 22:40:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.166 22:40:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:41.166 22:40:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.166 22:40:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.166 22:40:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.079 22:40:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:43.079 00:13:43.079 real 0m37.891s 00:13:43.079 user 1m52.945s 00:13:43.079 sys 0m7.686s 00:13:43.079 22:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.079 22:40:27 -- common/autotest_common.sh@10 -- # set +x 00:13:43.079 ************************************ 00:13:43.079 END TEST nvmf_rpc 00:13:43.079 ************************************ 00:13:43.079 22:40:27 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:43.079 22:40:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:43.079 22:40:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.079 22:40:27 -- common/autotest_common.sh@10 -- # set +x 00:13:43.079 ************************************ 00:13:43.079 START TEST nvmf_invalid 00:13:43.079 ************************************ 00:13:43.079 22:40:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:43.341 * Looking for test storage... 
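(The jsum checks a few lines above simply sum a jq projection over the nvmf_get_stats JSON; a sketch of that helper, assuming the stats JSON is held in $stats as in the trace:)

    jsum() {
        local filter=$1
        # sum the selected numeric field across all poll groups
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 7 in this run
    jsum '.poll_groups[].io_qpairs'      # 889 in this run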
00:13:43.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.341 22:40:27 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.341 22:40:27 -- nvmf/common.sh@7 -- # uname -s 00:13:43.341 22:40:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.341 22:40:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.341 22:40:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.341 22:40:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.341 22:40:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.341 22:40:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.341 22:40:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.341 22:40:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.341 22:40:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.341 22:40:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.341 22:40:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.341 22:40:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.341 22:40:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.341 22:40:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.341 22:40:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.341 22:40:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.341 22:40:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.341 22:40:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.341 22:40:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.341 22:40:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.341 22:40:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.341 22:40:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.341 22:40:27 -- paths/export.sh@5 -- # export PATH 00:13:43.341 22:40:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.341 22:40:27 -- nvmf/common.sh@46 -- # : 0 00:13:43.341 22:40:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:43.341 22:40:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:43.341 22:40:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:43.341 22:40:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.341 22:40:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.341 22:40:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:43.341 22:40:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:43.341 22:40:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:43.341 22:40:27 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:43.341 22:40:27 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.341 22:40:27 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:43.341 22:40:27 -- target/invalid.sh@14 -- # target=foobar 00:13:43.341 22:40:27 -- target/invalid.sh@16 -- # RANDOM=0 00:13:43.341 22:40:27 -- target/invalid.sh@34 -- # nvmftestinit 00:13:43.341 22:40:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:43.341 22:40:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.341 22:40:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:43.341 22:40:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:43.341 22:40:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:43.342 22:40:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.342 22:40:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.342 22:40:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.342 22:40:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:43.342 22:40:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:43.342 22:40:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:43.342 22:40:27 -- common/autotest_common.sh@10 -- # set +x 00:13:51.494 22:40:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:51.494 22:40:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:51.494 22:40:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:51.494 22:40:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:51.494 22:40:35 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:51.494 22:40:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:51.494 22:40:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:51.494 22:40:35 -- nvmf/common.sh@294 -- # net_devs=() 00:13:51.494 22:40:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:51.494 22:40:35 -- nvmf/common.sh@295 -- # e810=() 00:13:51.494 22:40:35 -- nvmf/common.sh@295 -- # local -ga e810 00:13:51.494 22:40:35 -- nvmf/common.sh@296 -- # x722=() 00:13:51.494 22:40:35 -- nvmf/common.sh@296 -- # local -ga x722 00:13:51.494 22:40:35 -- nvmf/common.sh@297 -- # mlx=() 00:13:51.494 22:40:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:51.494 22:40:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.494 22:40:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:51.494 22:40:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:51.494 22:40:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:51.494 22:40:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:51.494 22:40:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:51.494 22:40:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:51.494 22:40:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:51.494 22:40:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:51.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:51.494 22:40:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:51.495 22:40:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:51.495 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:51.495 22:40:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:51.495 22:40:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:51.495 
22:40:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.495 22:40:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:51.495 22:40:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.495 22:40:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:51.495 Found net devices under 0000:31:00.0: cvl_0_0 00:13:51.495 22:40:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.495 22:40:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:51.495 22:40:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.495 22:40:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:51.495 22:40:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.495 22:40:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:51.495 Found net devices under 0000:31:00.1: cvl_0_1 00:13:51.495 22:40:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.495 22:40:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:51.495 22:40:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:51.495 22:40:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:51.495 22:40:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.495 22:40:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.495 22:40:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.495 22:40:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:51.495 22:40:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.495 22:40:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.495 22:40:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:51.495 22:40:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.495 22:40:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.495 22:40:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:51.495 22:40:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:51.495 22:40:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.495 22:40:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.495 22:40:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.495 22:40:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.495 22:40:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:51.495 22:40:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.495 22:40:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.495 22:40:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.495 22:40:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:51.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:13:51.495 00:13:51.495 --- 10.0.0.2 ping statistics --- 00:13:51.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.495 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:13:51.495 22:40:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:13:51.495 00:13:51.495 --- 10.0.0.1 ping statistics --- 00:13:51.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.495 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:13:51.495 22:40:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.495 22:40:35 -- nvmf/common.sh@410 -- # return 0 00:13:51.495 22:40:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:51.495 22:40:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.495 22:40:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:51.495 22:40:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.495 22:40:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:51.495 22:40:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:51.495 22:40:35 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:51.495 22:40:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:51.495 22:40:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:51.495 22:40:35 -- common/autotest_common.sh@10 -- # set +x 00:13:51.495 22:40:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.495 22:40:35 -- nvmf/common.sh@469 -- # nvmfpid=1018767 00:13:51.495 22:40:35 -- nvmf/common.sh@470 -- # waitforlisten 1018767 00:13:51.495 22:40:35 -- common/autotest_common.sh@819 -- # '[' -z 1018767 ']' 00:13:51.495 22:40:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.495 22:40:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:51.495 22:40:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.495 22:40:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:51.495 22:40:35 -- common/autotest_common.sh@10 -- # set +x 00:13:51.495 [2024-04-15 22:40:35.998306] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:13:51.495 [2024-04-15 22:40:35.998352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.495 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.495 [2024-04-15 22:40:36.064895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.495 [2024-04-15 22:40:36.129955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.495 [2024-04-15 22:40:36.130090] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.495 [2024-04-15 22:40:36.130099] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.495 [2024-04-15 22:40:36.130107] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:51.495 [2024-04-15 22:40:36.130227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.495 [2024-04-15 22:40:36.130349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.495 [2024-04-15 22:40:36.130511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.495 [2024-04-15 22:40:36.130512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.066 22:40:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.066 22:40:36 -- common/autotest_common.sh@852 -- # return 0 00:13:52.066 22:40:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.066 22:40:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:52.066 22:40:36 -- common/autotest_common.sh@10 -- # set +x 00:13:52.066 22:40:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.066 22:40:36 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:52.066 22:40:36 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10169 00:13:52.327 [2024-04-15 22:40:36.979221] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:52.327 22:40:37 -- target/invalid.sh@40 -- # out='request: 00:13:52.327 { 00:13:52.327 "nqn": "nqn.2016-06.io.spdk:cnode10169", 00:13:52.327 "tgt_name": "foobar", 00:13:52.327 "method": "nvmf_create_subsystem", 00:13:52.327 "req_id": 1 00:13:52.327 } 00:13:52.327 Got JSON-RPC error response 00:13:52.327 response: 00:13:52.327 { 00:13:52.327 "code": -32603, 00:13:52.327 "message": "Unable to find target foobar" 00:13:52.327 }' 00:13:52.327 22:40:37 -- target/invalid.sh@41 -- # [[ request: 00:13:52.327 { 00:13:52.327 "nqn": "nqn.2016-06.io.spdk:cnode10169", 00:13:52.327 "tgt_name": "foobar", 00:13:52.327 "method": "nvmf_create_subsystem", 00:13:52.327 "req_id": 1 00:13:52.327 } 00:13:52.327 Got JSON-RPC error response 00:13:52.327 response: 00:13:52.327 { 00:13:52.327 "code": -32603, 00:13:52.327 "message": "Unable to find target foobar" 00:13:52.327 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:52.327 22:40:37 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:52.327 22:40:37 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18078 00:13:52.588 [2024-04-15 22:40:37.143796] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18078: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:52.588 22:40:37 -- target/invalid.sh@45 -- # out='request: 00:13:52.588 { 00:13:52.588 "nqn": "nqn.2016-06.io.spdk:cnode18078", 00:13:52.588 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:52.588 "method": "nvmf_create_subsystem", 00:13:52.588 "req_id": 1 00:13:52.588 } 00:13:52.588 Got JSON-RPC error response 00:13:52.588 response: 00:13:52.588 { 00:13:52.588 "code": -32602, 00:13:52.588 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:52.588 }' 00:13:52.588 22:40:37 -- target/invalid.sh@46 -- # [[ request: 00:13:52.588 { 00:13:52.588 "nqn": "nqn.2016-06.io.spdk:cnode18078", 00:13:52.588 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:52.588 "method": "nvmf_create_subsystem", 00:13:52.588 "req_id": 1 00:13:52.588 } 00:13:52.588 Got JSON-RPC error response 00:13:52.588 response: 00:13:52.588 { 
00:13:52.588 "code": -32602, 00:13:52.588 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:52.588 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:52.588 22:40:37 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:52.588 22:40:37 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24024 00:13:52.588 [2024-04-15 22:40:37.316314] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24024: invalid model number 'SPDK_Controller' 00:13:52.588 22:40:37 -- target/invalid.sh@50 -- # out='request: 00:13:52.588 { 00:13:52.588 "nqn": "nqn.2016-06.io.spdk:cnode24024", 00:13:52.588 "model_number": "SPDK_Controller\u001f", 00:13:52.588 "method": "nvmf_create_subsystem", 00:13:52.588 "req_id": 1 00:13:52.588 } 00:13:52.588 Got JSON-RPC error response 00:13:52.588 response: 00:13:52.588 { 00:13:52.588 "code": -32602, 00:13:52.588 "message": "Invalid MN SPDK_Controller\u001f" 00:13:52.588 }' 00:13:52.588 22:40:37 -- target/invalid.sh@51 -- # [[ request: 00:13:52.588 { 00:13:52.588 "nqn": "nqn.2016-06.io.spdk:cnode24024", 00:13:52.588 "model_number": "SPDK_Controller\u001f", 00:13:52.588 "method": "nvmf_create_subsystem", 00:13:52.588 "req_id": 1 00:13:52.588 } 00:13:52.588 Got JSON-RPC error response 00:13:52.588 response: 00:13:52.588 { 00:13:52.588 "code": -32602, 00:13:52.588 "message": "Invalid MN SPDK_Controller\u001f" 00:13:52.588 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:52.588 22:40:37 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:52.588 22:40:37 -- target/invalid.sh@19 -- # local length=21 ll 00:13:52.588 22:40:37 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:52.588 22:40:37 -- target/invalid.sh@21 -- # local chars 00:13:52.588 22:40:37 -- target/invalid.sh@22 -- # local string 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # printf %x 65 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # string+=A 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # printf %x 63 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # string+='?' 
00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # printf %x 116 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # string+=t 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # printf %x 92 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # string+='\' 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # printf %x 33 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # string+='!' 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # printf %x 122 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:52.588 22:40:37 -- target/invalid.sh@25 -- # string+=z 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.588 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 52 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=4 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 111 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=o 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 84 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=T 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 124 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+='|' 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 73 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=I 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 37 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=% 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 76 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=L 
00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 126 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+='~' 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 109 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=m 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 38 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+='&' 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 70 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=F 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 120 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=x 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 118 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=v 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 98 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+=b 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # printf %x 94 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:52.849 22:40:37 -- target/invalid.sh@25 -- # string+='^' 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.849 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.849 22:40:37 -- target/invalid.sh@28 -- # [[ A == \- ]] 00:13:52.849 22:40:37 -- target/invalid.sh@31 -- # echo 'A?t\!z4oT|I%L~m&Fxvb^' 00:13:52.849 22:40:37 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'A?t\!z4oT|I%L~m&Fxvb^' nqn.2016-06.io.spdk:cnode29760 00:13:52.849 [2024-04-15 22:40:37.649386] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29760: invalid serial number 'A?t\!z4oT|I%L~m&Fxvb^' 00:13:53.110 22:40:37 -- target/invalid.sh@54 -- # out='request: 00:13:53.110 { 00:13:53.110 "nqn": "nqn.2016-06.io.spdk:cnode29760", 00:13:53.110 "serial_number": "A?t\\!z4oT|I%L~m&Fxvb^", 00:13:53.110 "method": "nvmf_create_subsystem", 00:13:53.110 "req_id": 1 00:13:53.110 } 00:13:53.110 Got JSON-RPC error 
response 00:13:53.110 response: 00:13:53.110 { 00:13:53.110 "code": -32602, 00:13:53.110 "message": "Invalid SN A?t\\!z4oT|I%L~m&Fxvb^" 00:13:53.110 }' 00:13:53.110 22:40:37 -- target/invalid.sh@55 -- # [[ request: 00:13:53.110 { 00:13:53.110 "nqn": "nqn.2016-06.io.spdk:cnode29760", 00:13:53.110 "serial_number": "A?t\\!z4oT|I%L~m&Fxvb^", 00:13:53.110 "method": "nvmf_create_subsystem", 00:13:53.110 "req_id": 1 00:13:53.110 } 00:13:53.110 Got JSON-RPC error response 00:13:53.110 response: 00:13:53.110 { 00:13:53.110 "code": -32602, 00:13:53.110 "message": "Invalid SN A?t\\!z4oT|I%L~m&Fxvb^" 00:13:53.110 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:53.110 22:40:37 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:53.110 22:40:37 -- target/invalid.sh@19 -- # local length=41 ll 00:13:53.110 22:40:37 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:53.110 22:40:37 -- target/invalid.sh@21 -- # local chars 00:13:53.110 22:40:37 -- target/invalid.sh@22 -- # local string 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 115 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=s 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 41 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=')' 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 119 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=w 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 113 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=q 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 117 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=u 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 51 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=3 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 
22:40:37 -- target/invalid.sh@25 -- # printf %x 61 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+== 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 46 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=. 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 63 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+='?' 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 83 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=S 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 62 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+='>' 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 74 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=J 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 75 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=K 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 87 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=W 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 37 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=% 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 47 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=/ 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 113 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=q 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 
22:40:37 -- target/invalid.sh@25 -- # printf %x 44 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=, 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 33 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+='!' 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 51 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=3 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 84 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=T 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 67 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=C 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 47 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=/ 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 79 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=O 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 59 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=';' 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 51 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=3 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 43 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=+ 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 52 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=4 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 
-- target/invalid.sh@25 -- # printf %x 72 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=H 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 79 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # string+=O 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.111 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.111 22:40:37 -- target/invalid.sh@25 -- # printf %x 51 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=3 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 116 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=t 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 34 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+='"' 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 51 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=3 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 74 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=J 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 63 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+='?' 
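The xtrace above records gen_random_s building the model-number string one character at a time: for each position it prints a decimal code from its chars table with printf %x, converts it with echo -e '\xNN', and appends the result until the requested length is reached (with a check at invalid.sh@28 on whether the first character is '-'). A minimal standalone sketch of that pattern, assuming the codes are drawn uniformly at random from the same 32-127 table shown in the trace:

# sketch only -- the uniform RANDOM selection is an assumption; the per-character
# printf / echo -e conversion mirrors what the trace above shows
gen_random_s() {
    local length=$1 ll string=
    local -a chars=($(seq 32 127))                     # decimal ASCII codes, as in the traced chars=() array
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[RANDOM % ${#chars[@]}]}
        string+=$(echo -e "\\x$(printf %x "$code")")   # decimal -> \xNN -> appended character
    done
    echo "$string"
}
gen_random_s 41    # e.g. a 41-character model-number candidate, like the one passed to cnode12496 below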
00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 47 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=/ 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 37 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=% 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 70 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=F 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 99 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=c 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # printf %x 65 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:53.373 22:40:37 -- target/invalid.sh@25 -- # string+=A 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:53.373 22:40:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:53.373 22:40:37 -- target/invalid.sh@28 -- # [[ s == \- ]] 00:13:53.373 22:40:37 -- target/invalid.sh@31 -- # echo 's)wqu3=.?S>JKW%/q,!3TC/O;3+4HO3t"3J?/%FcA' 00:13:53.373 22:40:37 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 's)wqu3=.?S>JKW%/q,!3TC/O;3+4HO3t"3J?/%FcA' nqn.2016-06.io.spdk:cnode12496 00:13:53.373 [2024-04-15 22:40:38.134971] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12496: invalid model number 's)wqu3=.?S>JKW%/q,!3TC/O;3+4HO3t"3J?/%FcA' 00:13:53.373 22:40:38 -- target/invalid.sh@58 -- # out='request: 00:13:53.373 { 00:13:53.373 "nqn": "nqn.2016-06.io.spdk:cnode12496", 00:13:53.373 "model_number": "s)wqu3=.?S>JKW%/q,!3TC/O;3+4HO3t\"3J?/%FcA", 00:13:53.373 "method": "nvmf_create_subsystem", 00:13:53.373 "req_id": 1 00:13:53.373 } 00:13:53.373 Got JSON-RPC error response 00:13:53.373 response: 00:13:53.373 { 00:13:53.373 "code": -32602, 00:13:53.373 "message": "Invalid MN s)wqu3=.?S>JKW%/q,!3TC/O;3+4HO3t\"3J?/%FcA" 00:13:53.373 }' 00:13:53.373 22:40:38 -- target/invalid.sh@59 -- # [[ request: 00:13:53.373 { 00:13:53.373 "nqn": "nqn.2016-06.io.spdk:cnode12496", 00:13:53.373 "model_number": "s)wqu3=.?S>JKW%/q,!3TC/O;3+4HO3t\"3J?/%FcA", 00:13:53.373 "method": "nvmf_create_subsystem", 00:13:53.373 "req_id": 1 00:13:53.373 } 00:13:53.373 Got JSON-RPC error response 00:13:53.373 response: 00:13:53.373 { 00:13:53.373 "code": -32602, 00:13:53.373 "message": "Invalid MN s)wqu3=.?S>JKW%/q,!3TC/O;3+4HO3t\"3J?/%FcA" 00:13:53.373 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:53.373 22:40:38 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:53.634 [2024-04-15 22:40:38.303595] tcp.c: 
659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.634 22:40:38 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:53.903 22:40:38 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:53.903 22:40:38 -- target/invalid.sh@67 -- # echo '' 00:13:53.903 22:40:38 -- target/invalid.sh@67 -- # head -n 1 00:13:53.903 22:40:38 -- target/invalid.sh@67 -- # IP= 00:13:53.903 22:40:38 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:53.903 [2024-04-15 22:40:38.648765] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:53.903 22:40:38 -- target/invalid.sh@69 -- # out='request: 00:13:53.903 { 00:13:53.903 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:53.903 "listen_address": { 00:13:53.903 "trtype": "tcp", 00:13:53.903 "traddr": "", 00:13:53.903 "trsvcid": "4421" 00:13:53.903 }, 00:13:53.903 "method": "nvmf_subsystem_remove_listener", 00:13:53.903 "req_id": 1 00:13:53.903 } 00:13:53.903 Got JSON-RPC error response 00:13:53.903 response: 00:13:53.903 { 00:13:53.903 "code": -32602, 00:13:53.903 "message": "Invalid parameters" 00:13:53.903 }' 00:13:53.903 22:40:38 -- target/invalid.sh@70 -- # [[ request: 00:13:53.903 { 00:13:53.903 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:53.903 "listen_address": { 00:13:53.903 "trtype": "tcp", 00:13:53.903 "traddr": "", 00:13:53.903 "trsvcid": "4421" 00:13:53.903 }, 00:13:53.903 "method": "nvmf_subsystem_remove_listener", 00:13:53.903 "req_id": 1 00:13:53.903 } 00:13:53.903 Got JSON-RPC error response 00:13:53.903 response: 00:13:53.903 { 00:13:53.903 "code": -32602, 00:13:53.903 "message": "Invalid parameters" 00:13:53.903 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:53.903 22:40:38 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9436 -i 0 00:13:54.162 [2024-04-15 22:40:38.817314] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9436: invalid cntlid range [0-65519] 00:13:54.162 22:40:38 -- target/invalid.sh@73 -- # out='request: 00:13:54.162 { 00:13:54.162 "nqn": "nqn.2016-06.io.spdk:cnode9436", 00:13:54.162 "min_cntlid": 0, 00:13:54.162 "method": "nvmf_create_subsystem", 00:13:54.162 "req_id": 1 00:13:54.162 } 00:13:54.162 Got JSON-RPC error response 00:13:54.162 response: 00:13:54.162 { 00:13:54.162 "code": -32602, 00:13:54.162 "message": "Invalid cntlid range [0-65519]" 00:13:54.162 }' 00:13:54.162 22:40:38 -- target/invalid.sh@74 -- # [[ request: 00:13:54.162 { 00:13:54.162 "nqn": "nqn.2016-06.io.spdk:cnode9436", 00:13:54.162 "min_cntlid": 0, 00:13:54.162 "method": "nvmf_create_subsystem", 00:13:54.162 "req_id": 1 00:13:54.162 } 00:13:54.162 Got JSON-RPC error response 00:13:54.162 response: 00:13:54.162 { 00:13:54.162 "code": -32602, 00:13:54.162 "message": "Invalid cntlid range [0-65519]" 00:13:54.162 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.162 22:40:38 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9243 -i 65520 00:13:54.423 [2024-04-15 22:40:38.989889] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9243: invalid cntlid range [65520-65519] 00:13:54.423 22:40:39 -- 
target/invalid.sh@75 -- # out='request: 00:13:54.423 { 00:13:54.423 "nqn": "nqn.2016-06.io.spdk:cnode9243", 00:13:54.423 "min_cntlid": 65520, 00:13:54.423 "method": "nvmf_create_subsystem", 00:13:54.423 "req_id": 1 00:13:54.423 } 00:13:54.423 Got JSON-RPC error response 00:13:54.423 response: 00:13:54.423 { 00:13:54.423 "code": -32602, 00:13:54.423 "message": "Invalid cntlid range [65520-65519]" 00:13:54.423 }' 00:13:54.423 22:40:39 -- target/invalid.sh@76 -- # [[ request: 00:13:54.423 { 00:13:54.423 "nqn": "nqn.2016-06.io.spdk:cnode9243", 00:13:54.423 "min_cntlid": 65520, 00:13:54.423 "method": "nvmf_create_subsystem", 00:13:54.423 "req_id": 1 00:13:54.423 } 00:13:54.423 Got JSON-RPC error response 00:13:54.423 response: 00:13:54.423 { 00:13:54.423 "code": -32602, 00:13:54.423 "message": "Invalid cntlid range [65520-65519]" 00:13:54.423 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.423 22:40:39 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9069 -I 0 00:13:54.423 [2024-04-15 22:40:39.162496] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9069: invalid cntlid range [1-0] 00:13:54.423 22:40:39 -- target/invalid.sh@77 -- # out='request: 00:13:54.423 { 00:13:54.423 "nqn": "nqn.2016-06.io.spdk:cnode9069", 00:13:54.423 "max_cntlid": 0, 00:13:54.423 "method": "nvmf_create_subsystem", 00:13:54.423 "req_id": 1 00:13:54.423 } 00:13:54.423 Got JSON-RPC error response 00:13:54.423 response: 00:13:54.423 { 00:13:54.423 "code": -32602, 00:13:54.423 "message": "Invalid cntlid range [1-0]" 00:13:54.423 }' 00:13:54.423 22:40:39 -- target/invalid.sh@78 -- # [[ request: 00:13:54.423 { 00:13:54.423 "nqn": "nqn.2016-06.io.spdk:cnode9069", 00:13:54.423 "max_cntlid": 0, 00:13:54.423 "method": "nvmf_create_subsystem", 00:13:54.423 "req_id": 1 00:13:54.423 } 00:13:54.423 Got JSON-RPC error response 00:13:54.423 response: 00:13:54.423 { 00:13:54.423 "code": -32602, 00:13:54.423 "message": "Invalid cntlid range [1-0]" 00:13:54.423 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.423 22:40:39 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15465 -I 65520 00:13:54.684 [2024-04-15 22:40:39.335021] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15465: invalid cntlid range [1-65520] 00:13:54.684 22:40:39 -- target/invalid.sh@79 -- # out='request: 00:13:54.684 { 00:13:54.684 "nqn": "nqn.2016-06.io.spdk:cnode15465", 00:13:54.684 "max_cntlid": 65520, 00:13:54.684 "method": "nvmf_create_subsystem", 00:13:54.684 "req_id": 1 00:13:54.684 } 00:13:54.684 Got JSON-RPC error response 00:13:54.684 response: 00:13:54.684 { 00:13:54.684 "code": -32602, 00:13:54.684 "message": "Invalid cntlid range [1-65520]" 00:13:54.684 }' 00:13:54.684 22:40:39 -- target/invalid.sh@80 -- # [[ request: 00:13:54.684 { 00:13:54.684 "nqn": "nqn.2016-06.io.spdk:cnode15465", 00:13:54.684 "max_cntlid": 65520, 00:13:54.684 "method": "nvmf_create_subsystem", 00:13:54.684 "req_id": 1 00:13:54.684 } 00:13:54.684 Got JSON-RPC error response 00:13:54.684 response: 00:13:54.684 { 00:13:54.684 "code": -32602, 00:13:54.684 "message": "Invalid cntlid range [1-65520]" 00:13:54.684 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.684 22:40:39 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode24498 -i 6 -I 5 00:13:54.945 [2024-04-15 22:40:39.507625] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24498: invalid cntlid range [6-5] 00:13:54.945 22:40:39 -- target/invalid.sh@83 -- # out='request: 00:13:54.945 { 00:13:54.945 "nqn": "nqn.2016-06.io.spdk:cnode24498", 00:13:54.945 "min_cntlid": 6, 00:13:54.945 "max_cntlid": 5, 00:13:54.945 "method": "nvmf_create_subsystem", 00:13:54.945 "req_id": 1 00:13:54.945 } 00:13:54.945 Got JSON-RPC error response 00:13:54.945 response: 00:13:54.945 { 00:13:54.945 "code": -32602, 00:13:54.945 "message": "Invalid cntlid range [6-5]" 00:13:54.945 }' 00:13:54.945 22:40:39 -- target/invalid.sh@84 -- # [[ request: 00:13:54.945 { 00:13:54.945 "nqn": "nqn.2016-06.io.spdk:cnode24498", 00:13:54.945 "min_cntlid": 6, 00:13:54.945 "max_cntlid": 5, 00:13:54.945 "method": "nvmf_create_subsystem", 00:13:54.945 "req_id": 1 00:13:54.945 } 00:13:54.945 Got JSON-RPC error response 00:13:54.945 response: 00:13:54.945 { 00:13:54.945 "code": -32602, 00:13:54.945 "message": "Invalid cntlid range [6-5]" 00:13:54.945 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.945 22:40:39 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:54.945 22:40:39 -- target/invalid.sh@87 -- # out='request: 00:13:54.945 { 00:13:54.945 "name": "foobar", 00:13:54.945 "method": "nvmf_delete_target", 00:13:54.945 "req_id": 1 00:13:54.945 } 00:13:54.945 Got JSON-RPC error response 00:13:54.945 response: 00:13:54.945 { 00:13:54.945 "code": -32602, 00:13:54.945 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:54.945 }' 00:13:54.945 22:40:39 -- target/invalid.sh@88 -- # [[ request: 00:13:54.945 { 00:13:54.945 "name": "foobar", 00:13:54.945 "method": "nvmf_delete_target", 00:13:54.945 "req_id": 1 00:13:54.945 } 00:13:54.945 Got JSON-RPC error response 00:13:54.945 response: 00:13:54.945 { 00:13:54.945 "code": -32602, 00:13:54.945 "message": "The specified target doesn't exist, cannot delete it." 
00:13:54.945 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:54.945 22:40:39 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:54.945 22:40:39 -- target/invalid.sh@91 -- # nvmftestfini 00:13:54.945 22:40:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:54.945 22:40:39 -- nvmf/common.sh@116 -- # sync 00:13:54.945 22:40:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:54.945 22:40:39 -- nvmf/common.sh@119 -- # set +e 00:13:54.945 22:40:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:54.945 22:40:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:54.945 rmmod nvme_tcp 00:13:54.945 rmmod nvme_fabrics 00:13:54.945 rmmod nvme_keyring 00:13:54.945 22:40:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:54.945 22:40:39 -- nvmf/common.sh@123 -- # set -e 00:13:54.945 22:40:39 -- nvmf/common.sh@124 -- # return 0 00:13:54.945 22:40:39 -- nvmf/common.sh@477 -- # '[' -n 1018767 ']' 00:13:54.945 22:40:39 -- nvmf/common.sh@478 -- # killprocess 1018767 00:13:54.945 22:40:39 -- common/autotest_common.sh@926 -- # '[' -z 1018767 ']' 00:13:54.945 22:40:39 -- common/autotest_common.sh@930 -- # kill -0 1018767 00:13:54.945 22:40:39 -- common/autotest_common.sh@931 -- # uname 00:13:54.945 22:40:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:54.945 22:40:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1018767 00:13:54.945 22:40:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:54.945 22:40:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:54.945 22:40:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1018767' 00:13:54.945 killing process with pid 1018767 00:13:54.945 22:40:39 -- common/autotest_common.sh@945 -- # kill 1018767 00:13:54.945 22:40:39 -- common/autotest_common.sh@950 -- # wait 1018767 00:13:55.206 22:40:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:55.206 22:40:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:55.206 22:40:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:55.206 22:40:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.206 22:40:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:55.206 22:40:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.206 22:40:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.206 22:40:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.751 22:40:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:57.751 00:13:57.751 real 0m14.116s 00:13:57.751 user 0m19.415s 00:13:57.751 sys 0m6.712s 00:13:57.751 22:40:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.751 22:40:41 -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 ************************************ 00:13:57.751 END TEST nvmf_invalid 00:13:57.751 ************************************ 00:13:57.751 22:40:41 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:57.751 22:40:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:57.751 22:40:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:57.751 22:40:41 -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 ************************************ 00:13:57.751 START TEST nvmf_abort 00:13:57.751 ************************************ 00:13:57.751 22:40:41 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:57.751 * Looking for test storage... 00:13:57.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.751 22:40:42 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.751 22:40:42 -- nvmf/common.sh@7 -- # uname -s 00:13:57.751 22:40:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.751 22:40:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.751 22:40:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.751 22:40:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.751 22:40:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.751 22:40:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.751 22:40:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.751 22:40:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.751 22:40:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.751 22:40:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.751 22:40:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:57.751 22:40:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:57.751 22:40:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.751 22:40:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.751 22:40:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.751 22:40:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.751 22:40:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.751 22:40:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.751 22:40:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.751 22:40:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.751 22:40:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.751 22:40:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.751 22:40:42 -- paths/export.sh@5 -- # export PATH 00:13:57.751 22:40:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.751 22:40:42 -- nvmf/common.sh@46 -- # : 0 00:13:57.751 22:40:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:57.751 22:40:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:57.751 22:40:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:57.751 22:40:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.751 22:40:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.751 22:40:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:57.751 22:40:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:57.751 22:40:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:57.751 22:40:42 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.751 22:40:42 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:57.751 22:40:42 -- target/abort.sh@14 -- # nvmftestinit 00:13:57.751 22:40:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:57.751 22:40:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.751 22:40:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:57.751 22:40:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:57.751 22:40:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:57.751 22:40:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.751 22:40:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.751 22:40:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.751 22:40:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:57.751 22:40:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:57.751 22:40:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:57.751 22:40:42 -- common/autotest_common.sh@10 -- # set +x 00:14:05.901 22:40:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:05.901 22:40:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:05.901 22:40:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:05.901 22:40:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:05.901 22:40:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:05.901 22:40:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:05.901 22:40:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:05.901 22:40:49 -- nvmf/common.sh@294 -- # net_devs=() 00:14:05.901 22:40:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:05.901 22:40:49 -- nvmf/common.sh@295 -- 
# e810=() 00:14:05.901 22:40:49 -- nvmf/common.sh@295 -- # local -ga e810 00:14:05.901 22:40:49 -- nvmf/common.sh@296 -- # x722=() 00:14:05.901 22:40:49 -- nvmf/common.sh@296 -- # local -ga x722 00:14:05.901 22:40:49 -- nvmf/common.sh@297 -- # mlx=() 00:14:05.901 22:40:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:05.901 22:40:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.901 22:40:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:05.901 22:40:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:05.901 22:40:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:05.901 22:40:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:05.901 22:40:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:05.901 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:05.901 22:40:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:05.901 22:40:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:05.901 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:05.901 22:40:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:05.901 22:40:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:05.901 22:40:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:05.901 22:40:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.902 22:40:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:05.902 22:40:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.902 22:40:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:05.902 Found 
net devices under 0000:31:00.0: cvl_0_0 00:14:05.902 22:40:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.902 22:40:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:05.902 22:40:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.902 22:40:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:05.902 22:40:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.902 22:40:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:05.902 Found net devices under 0000:31:00.1: cvl_0_1 00:14:05.902 22:40:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.902 22:40:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:05.902 22:40:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:05.902 22:40:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:05.902 22:40:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:05.902 22:40:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:05.902 22:40:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.902 22:40:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.902 22:40:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.902 22:40:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:05.902 22:40:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.902 22:40:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.902 22:40:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:05.902 22:40:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.902 22:40:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.902 22:40:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:05.902 22:40:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:05.902 22:40:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.902 22:40:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.902 22:40:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.902 22:40:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.902 22:40:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:05.902 22:40:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.902 22:40:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.902 22:40:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.902 22:40:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:05.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:14:05.902 00:14:05.902 --- 10.0.0.2 ping statistics --- 00:14:05.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.902 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:14:05.902 22:40:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:14:05.902 00:14:05.902 --- 10.0.0.1 ping statistics --- 00:14:05.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.902 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:05.902 22:40:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.902 22:40:50 -- nvmf/common.sh@410 -- # return 0 00:14:05.902 22:40:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:05.902 22:40:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.902 22:40:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:05.902 22:40:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:05.902 22:40:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.902 22:40:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:05.902 22:40:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:05.902 22:40:50 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:05.902 22:40:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:05.902 22:40:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:05.902 22:40:50 -- common/autotest_common.sh@10 -- # set +x 00:14:05.902 22:40:50 -- nvmf/common.sh@469 -- # nvmfpid=1024466 00:14:05.902 22:40:50 -- nvmf/common.sh@470 -- # waitforlisten 1024466 00:14:05.902 22:40:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:05.902 22:40:50 -- common/autotest_common.sh@819 -- # '[' -z 1024466 ']' 00:14:05.902 22:40:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.902 22:40:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:05.902 22:40:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.902 22:40:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:05.902 22:40:50 -- common/autotest_common.sh@10 -- # set +x 00:14:05.902 [2024-04-15 22:40:50.201492] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:14:05.902 [2024-04-15 22:40:50.201561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.902 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.902 [2024-04-15 22:40:50.279113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.902 [2024-04-15 22:40:50.350178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:05.902 [2024-04-15 22:40:50.350305] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.902 [2024-04-15 22:40:50.350313] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.902 [2024-04-15 22:40:50.350320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
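At this point the abort test has its networking in place (the cvl_0_0/cvl_0_1 pair split across the cvl_0_0_ns_spdk namespace, with 10.0.0.2 and 10.0.0.1 pinging both ways) and has launched nvmf_tgt inside that namespace, waiting for its JSON-RPC socket before creating the TCP transport. A condensed sketch of that launch step, reusing the binary path, namespace and flags from the log; the polling loop stands in for the repo's waitforlisten helper and is only an assumption:

# sketch only -- run as root, as the CI job does; the until-loop is an assumed stand-in for waitforlisten
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# wait until the target answers JSON-RPC on its default socket
until "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do sleep 0.5; done
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192 -a 256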
00:14:05.902 [2024-04-15 22:40:50.350452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.902 [2024-04-15 22:40:50.350603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.902 [2024-04-15 22:40:50.350791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.473 22:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:06.473 22:40:50 -- common/autotest_common.sh@852 -- # return 0 00:14:06.473 22:40:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:06.473 22:40:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:06.473 22:40:50 -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 22:40:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.473 22:40:51 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:06.473 22:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.473 22:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 [2024-04-15 22:40:51.026910] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.473 22:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.473 22:40:51 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:06.473 22:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.473 22:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 Malloc0 00:14:06.473 22:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.473 22:40:51 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:06.473 22:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.473 22:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 Delay0 00:14:06.473 22:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.473 22:40:51 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:06.473 22:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.473 22:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 22:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.473 22:40:51 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:06.473 22:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.473 22:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 22:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.473 22:40:51 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:06.473 22:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.473 22:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 [2024-04-15 22:40:51.108058] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.473 22:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.473 22:40:51 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:06.473 22:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.473 22:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 22:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.473 22:40:51 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:06.473 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.473 [2024-04-15 22:40:51.259721] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:09.014 Initializing NVMe Controllers 00:14:09.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:09.014 controller IO queue size 128 less than required 00:14:09.014 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:09.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:09.014 Initialization complete. Launching workers. 00:14:09.014 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33338 00:14:09.014 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33399, failed to submit 62 00:14:09.014 success 33338, unsuccess 61, failed 0 00:14:09.014 22:40:53 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:09.014 22:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.014 22:40:53 -- common/autotest_common.sh@10 -- # set +x 00:14:09.014 22:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.014 22:40:53 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:09.014 22:40:53 -- target/abort.sh@38 -- # nvmftestfini 00:14:09.014 22:40:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:09.014 22:40:53 -- nvmf/common.sh@116 -- # sync 00:14:09.014 22:40:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:09.014 22:40:53 -- nvmf/common.sh@119 -- # set +e 00:14:09.014 22:40:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:09.014 22:40:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:09.014 rmmod nvme_tcp 00:14:09.014 rmmod nvme_fabrics 00:14:09.014 rmmod nvme_keyring 00:14:09.014 22:40:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:09.014 22:40:53 -- nvmf/common.sh@123 -- # set -e 00:14:09.014 22:40:53 -- nvmf/common.sh@124 -- # return 0 00:14:09.014 22:40:53 -- nvmf/common.sh@477 -- # '[' -n 1024466 ']' 00:14:09.014 22:40:53 -- nvmf/common.sh@478 -- # killprocess 1024466 00:14:09.014 22:40:53 -- common/autotest_common.sh@926 -- # '[' -z 1024466 ']' 00:14:09.014 22:40:53 -- common/autotest_common.sh@930 -- # kill -0 1024466 00:14:09.014 22:40:53 -- common/autotest_common.sh@931 -- # uname 00:14:09.014 22:40:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:09.014 22:40:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1024466 00:14:09.014 22:40:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:09.014 22:40:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:09.014 22:40:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1024466' 00:14:09.014 killing process with pid 1024466 00:14:09.014 22:40:53 -- common/autotest_common.sh@945 -- # kill 1024466 00:14:09.015 22:40:53 -- common/autotest_common.sh@950 -- # wait 1024466 00:14:09.015 22:40:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:09.015 22:40:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:09.015 22:40:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:09.015 22:40:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.015 22:40:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:09.015 
22:40:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.015 22:40:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.015 22:40:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.988 22:40:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:10.988 00:14:10.988 real 0m13.671s 00:14:10.988 user 0m13.745s 00:14:10.988 sys 0m6.754s 00:14:10.988 22:40:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.988 22:40:55 -- common/autotest_common.sh@10 -- # set +x 00:14:10.988 ************************************ 00:14:10.988 END TEST nvmf_abort 00:14:10.988 ************************************ 00:14:10.988 22:40:55 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:10.989 22:40:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:10.989 22:40:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.989 22:40:55 -- common/autotest_common.sh@10 -- # set +x 00:14:10.989 ************************************ 00:14:10.989 START TEST nvmf_ns_hotplug_stress 00:14:10.989 ************************************ 00:14:10.989 22:40:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:10.989 * Looking for test storage... 00:14:10.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.249 22:40:55 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.249 22:40:55 -- nvmf/common.sh@7 -- # uname -s 00:14:11.249 22:40:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.249 22:40:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.249 22:40:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.249 22:40:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.249 22:40:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.249 22:40:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.249 22:40:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.249 22:40:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.249 22:40:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.249 22:40:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.249 22:40:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:11.249 22:40:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:11.249 22:40:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.249 22:40:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.249 22:40:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.249 22:40:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.249 22:40:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.249 22:40:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.249 22:40:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.250 22:40:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.250 22:40:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.250 22:40:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.250 22:40:55 -- paths/export.sh@5 -- # export PATH 00:14:11.250 22:40:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.250 22:40:55 -- nvmf/common.sh@46 -- # : 0 00:14:11.250 22:40:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:11.250 22:40:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:11.250 22:40:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:11.250 22:40:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.250 22:40:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.250 22:40:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:11.250 22:40:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:11.250 22:40:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:11.250 22:40:55 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.250 22:40:55 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:14:11.250 22:40:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:11.250 22:40:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.250 22:40:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:11.250 22:40:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:11.250 22:40:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:11.250 22:40:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:11.250 22:40:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.250 22:40:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.250 22:40:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:11.250 22:40:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:11.250 22:40:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:11.250 22:40:55 -- common/autotest_common.sh@10 -- # set +x 00:14:19.396 22:41:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.396 22:41:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:19.396 22:41:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:19.396 22:41:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:19.396 22:41:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:19.396 22:41:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:19.396 22:41:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:19.396 22:41:03 -- nvmf/common.sh@294 -- # net_devs=() 00:14:19.396 22:41:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:19.396 22:41:03 -- nvmf/common.sh@295 -- # e810=() 00:14:19.396 22:41:03 -- nvmf/common.sh@295 -- # local -ga e810 00:14:19.396 22:41:03 -- nvmf/common.sh@296 -- # x722=() 00:14:19.396 22:41:03 -- nvmf/common.sh@296 -- # local -ga x722 00:14:19.396 22:41:03 -- nvmf/common.sh@297 -- # mlx=() 00:14:19.396 22:41:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:19.396 22:41:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.396 22:41:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:19.396 22:41:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:19.396 22:41:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:19.396 22:41:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.396 22:41:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:19.396 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:19.396 22:41:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.396 22:41:03 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:19.396 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:19.396 22:41:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:19.396 22:41:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.396 22:41:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.396 22:41:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.396 22:41:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.396 22:41:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:19.396 Found net devices under 0000:31:00.0: cvl_0_0 00:14:19.396 22:41:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.396 22:41:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.396 22:41:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.396 22:41:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.396 22:41:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.396 22:41:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:19.396 Found net devices under 0000:31:00.1: cvl_0_1 00:14:19.396 22:41:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.396 22:41:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:19.396 22:41:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:19.396 22:41:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:19.396 22:41:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:19.396 22:41:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.396 22:41:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.396 22:41:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.397 22:41:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:19.397 22:41:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.397 22:41:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.397 22:41:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:19.397 22:41:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.397 22:41:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.397 22:41:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:19.397 22:41:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:19.397 22:41:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.397 22:41:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.397 22:41:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.397 22:41:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.397 22:41:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:19.397 22:41:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
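In plain shell, the nvmf_tcp_init sequence traced above boils down to the following (a condensed sketch using the interface names and addresses from this run; bringing up loopback and opening TCP port 4420 with iptables follow below):

    ip netns add cvl_0_0_ns_spdk                                        # isolate the target-side port in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up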
00:14:19.397 22:41:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.397 22:41:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.397 22:41:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:19.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:14:19.397 00:14:19.397 --- 10.0.0.2 ping statistics --- 00:14:19.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.397 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:14:19.397 22:41:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:14:19.397 00:14:19.397 --- 10.0.0.1 ping statistics --- 00:14:19.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.397 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:14:19.397 22:41:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.397 22:41:03 -- nvmf/common.sh@410 -- # return 0 00:14:19.397 22:41:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.397 22:41:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.397 22:41:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:19.397 22:41:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:19.397 22:41:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.397 22:41:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:19.397 22:41:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.397 22:41:03 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:14:19.397 22:41:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.397 22:41:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:19.397 22:41:03 -- common/autotest_common.sh@10 -- # set +x 00:14:19.397 22:41:03 -- nvmf/common.sh@469 -- # nvmfpid=1029805 00:14:19.397 22:41:03 -- nvmf/common.sh@470 -- # waitforlisten 1029805 00:14:19.397 22:41:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:19.397 22:41:03 -- common/autotest_common.sh@819 -- # '[' -z 1029805 ']' 00:14:19.397 22:41:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.397 22:41:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.397 22:41:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.397 22:41:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.397 22:41:03 -- common/autotest_common.sh@10 -- # set +x 00:14:19.397 [2024-04-15 22:41:03.834731] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:14:19.397 [2024-04-15 22:41:03.834798] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.397 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.397 [2024-04-15 22:41:03.912548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.397 [2024-04-15 22:41:03.983688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.397 [2024-04-15 22:41:03.983809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.397 [2024-04-15 22:41:03.983818] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.397 [2024-04-15 22:41:03.983825] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.397 [2024-04-15 22:41:03.983946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.397 [2024-04-15 22:41:03.984102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.397 [2024-04-15 22:41:03.984103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.968 22:41:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:19.968 22:41:04 -- common/autotest_common.sh@852 -- # return 0 00:14:19.968 22:41:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:19.968 22:41:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:19.968 22:41:04 -- common/autotest_common.sh@10 -- # set +x 00:14:19.968 22:41:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.968 22:41:04 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:14:19.968 22:41:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:20.229 [2024-04-15 22:41:04.784259] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.229 22:41:04 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:20.229 22:41:04 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.489 [2024-04-15 22:41:05.113639] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.489 22:41:05 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.750 22:41:05 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:20.750 Malloc0 00:14:20.750 22:41:05 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:21.010 Delay0 00:14:21.010 22:41:05 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.010 22:41:05 -- target/ns_hotplug_stress.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:21.271 NULL1 00:14:21.271 22:41:05 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:21.531 22:41:06 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:21.531 22:41:06 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1030240 00:14:21.531 22:41:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:21.531 22:41:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.531 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.473 Read completed with error (sct=0, sc=11) 00:14:22.473 22:41:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.733 22:41:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:14:22.733 22:41:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:22.994 true 00:14:22.994 22:41:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:22.994 22:41:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.934 22:41:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.934 22:41:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:14:23.934 22:41:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:24.195 true 00:14:24.195 22:41:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:24.195 22:41:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.195 22:41:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.456 22:41:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:14:24.456 22:41:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:24.456 true 00:14:24.716 22:41:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:24.716 22:41:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.716 22:41:09 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.976 22:41:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:14:24.976 22:41:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:24.976 true 00:14:24.976 22:41:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:24.976 22:41:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.236 22:41:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.497 22:41:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:14:25.497 22:41:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:25.497 true 00:14:25.497 22:41:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:25.497 22:41:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.758 22:41:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.758 22:41:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:14:25.758 22:41:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:26.019 true 00:14:26.019 22:41:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:26.019 22:41:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.279 22:41:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.279 22:41:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:14:26.279 22:41:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:26.540 true 00:14:26.540 22:41:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:26.540 22:41:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.799 22:41:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.799 22:41:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:14:26.799 22:41:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:27.060 true 00:14:27.060 22:41:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:27.060 22:41:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.999 22:41:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:27.999 22:41:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:14:27.999 22:41:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:28.259 true 00:14:28.259 22:41:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:28.259 22:41:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.519 22:41:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.519 22:41:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:14:28.519 22:41:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:28.779 true 00:14:28.779 22:41:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:28.779 22:41:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.779 22:41:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.040 22:41:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:14:29.040 22:41:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:29.300 true 00:14:29.300 22:41:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:29.300 22:41:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.300 22:41:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.561 22:41:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:14:29.561 22:41:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:29.561 true 00:14:29.821 22:41:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:29.821 22:41:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.821 22:41:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.082 22:41:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:14:30.082 22:41:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:30.082 true 00:14:30.082 22:41:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:30.082 22:41:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.342 22:41:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.603 22:41:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:14:30.603 22:41:15 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:30.603 true 00:14:30.603 22:41:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:30.603 22:41:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.863 22:41:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.123 22:41:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:14:31.123 22:41:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:31.123 true 00:14:31.123 22:41:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:31.123 22:41:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.065 22:41:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.326 22:41:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:14:32.326 22:41:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:32.326 true 00:14:32.326 22:41:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:32.326 22:41:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.587 22:41:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.587 22:41:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:14:32.587 22:41:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:32.848 true 00:14:32.848 22:41:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:32.848 22:41:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.109 22:41:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.109 22:41:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:14:33.109 22:41:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:33.369 true 00:14:33.369 22:41:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:33.369 22:41:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.311 22:41:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.311 22:41:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:14:34.311 22:41:19 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:34.571 true 00:14:34.571 22:41:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:34.571 22:41:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.571 22:41:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.831 22:41:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:14:34.831 22:41:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:35.092 true 00:14:35.092 22:41:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:35.092 22:41:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.092 22:41:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.353 22:41:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:14:35.353 22:41:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:35.353 true 00:14:35.353 22:41:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:35.353 22:41:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.623 22:41:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.962 22:41:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:14:35.962 22:41:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:35.962 true 00:14:35.962 22:41:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:35.962 22:41:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.221 22:41:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.221 22:41:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:14:36.222 22:41:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:36.481 true 00:14:36.481 22:41:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:36.481 22:41:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.421 22:41:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.421 22:41:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:14:37.421 22:41:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:37.681 true 
00:14:37.681 22:41:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:37.681 22:41:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.942 22:41:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.942 22:41:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:14:37.942 22:41:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:38.203 true 00:14:38.203 22:41:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:38.203 22:41:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.203 22:41:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.465 22:41:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:14:38.465 22:41:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:38.726 true 00:14:38.726 22:41:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:38.726 22:41:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.726 22:41:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.986 22:41:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:14:38.986 22:41:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:38.986 true 00:14:38.986 22:41:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:38.986 22:41:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.246 22:41:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.507 22:41:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:14:39.507 22:41:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:39.507 true 00:14:39.507 22:41:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:39.507 22:41:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.768 22:41:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.029 22:41:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:14:40.029 22:41:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:40.029 true 00:14:40.029 22:41:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:40.029 22:41:24 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.289 22:41:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.549 22:41:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:14:40.549 22:41:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:40.549 true 00:14:40.549 22:41:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:40.549 22:41:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.812 22:41:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.812 22:41:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:14:40.812 22:41:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:41.073 true 00:14:41.073 22:41:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:41.073 22:41:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.334 22:41:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.334 22:41:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:14:41.334 22:41:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:41.595 true 00:14:41.595 22:41:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:41.595 22:41:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:42.536 22:41:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.536 22:41:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:14:42.536 22:41:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:42.796 true 00:14:42.796 22:41:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:42.796 22:41:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.056 22:41:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.056 22:41:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:14:43.056 22:41:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:43.317 true 00:14:43.317 22:41:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:43.317 22:41:27 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.317 22:41:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.577 22:41:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:14:43.577 22:41:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:43.838 true 00:14:43.838 22:41:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:43.838 22:41:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.838 22:41:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.098 22:41:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:14:44.098 22:41:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:44.098 true 00:14:44.359 22:41:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:44.359 22:41:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.359 22:41:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.620 22:41:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:14:44.620 22:41:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:44.620 true 00:14:44.620 22:41:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:44.620 22:41:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.562 22:41:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:45.822 22:41:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:14:45.822 22:41:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:45.822 true 00:14:45.822 22:41:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:45.822 22:41:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.083 22:41:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.345 22:41:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:14:46.345 22:41:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:46.345 true 00:14:46.345 22:41:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:46.345 22:41:31 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.605 22:41:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.867 22:41:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:14:46.867 22:41:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:46.867 true 00:14:46.867 22:41:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:46.867 22:41:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.128 22:41:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.128 22:41:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:14:47.128 22:41:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:47.389 true 00:14:47.389 22:41:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:47.389 22:41:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.649 22:41:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.649 22:41:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:14:47.650 22:41:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:47.910 true 00:14:47.910 22:41:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:47.910 22:41:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.171 22:41:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.171 22:41:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:14:48.171 22:41:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:48.432 true 00:14:48.432 22:41:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:48.432 22:41:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.692 22:41:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.692 22:41:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:14:48.692 22:41:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:48.954 true 00:14:48.954 22:41:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:48.954 22:41:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
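Each pass of the ns_hotplug_stress loop traced above and below repeats the same four steps while spdk_nvme_perf keeps issuing I/O; a condensed sketch of one iteration, using the subsystem, bdev names, and PID from this run (not the script verbatim):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-add the delay bdev as a second namespace
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 $null_size                           # resize the null bdev backing namespace 1
    kill -0 "$PERF_PID"                                                 # PERF_PID=1030240 here; confirm perf survived the hotplug
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove namespace 1 again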
00:14:48.954 22:41:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.215 22:41:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:14:49.215 22:41:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:49.477 true 00:14:49.477 22:41:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:49.477 22:41:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.477 22:41:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.738 22:41:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:14:49.738 22:41:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:49.738 true 00:14:49.999 22:41:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:49.999 22:41:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.941 22:41:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.941 22:41:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:14:50.941 22:41:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:50.941 true 00:14:51.202 22:41:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:51.202 22:41:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.202 22:41:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.463 22:41:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:14:51.463 22:41:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:51.463 true 00:14:51.463 22:41:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:51.463 22:41:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.724 Initializing NVMe Controllers 00:14:51.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.724 Controller IO queue size 128, less than required. 00:14:51.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:51.724 Controller IO queue size 128, less than required. 00:14:51.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:51.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:51.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:51.724 Initialization complete. Launching workers. 
00:14:51.724 ======================================================== 00:14:51.724 Latency(us) 00:14:51.724 Device Information : IOPS MiB/s Average min max 00:14:51.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 311.05 0.15 135895.14 2489.39 1137573.90 00:14:51.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11026.96 5.38 11608.52 1672.58 402446.09 00:14:51.724 ======================================================== 00:14:51.724 Total : 11338.01 5.54 15018.20 1672.58 1137573.90 00:14:51.724 00:14:51.724 22:41:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.985 22:41:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:14:51.985 22:41:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:51.985 true 00:14:51.985 22:41:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1030240 00:14:51.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1030240) - No such process 00:14:51.985 22:41:36 -- target/ns_hotplug_stress.sh@44 -- # wait 1030240 00:14:51.985 22:41:36 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:51.985 22:41:36 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:14:51.985 22:41:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.985 22:41:36 -- nvmf/common.sh@116 -- # sync 00:14:51.985 22:41:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:51.985 22:41:36 -- nvmf/common.sh@119 -- # set +e 00:14:51.985 22:41:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.985 22:41:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:51.985 rmmod nvme_tcp 00:14:51.985 rmmod nvme_fabrics 00:14:52.246 rmmod nvme_keyring 00:14:52.246 22:41:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.246 22:41:36 -- nvmf/common.sh@123 -- # set -e 00:14:52.246 22:41:36 -- nvmf/common.sh@124 -- # return 0 00:14:52.246 22:41:36 -- nvmf/common.sh@477 -- # '[' -n 1029805 ']' 00:14:52.246 22:41:36 -- nvmf/common.sh@478 -- # killprocess 1029805 00:14:52.246 22:41:36 -- common/autotest_common.sh@926 -- # '[' -z 1029805 ']' 00:14:52.246 22:41:36 -- common/autotest_common.sh@930 -- # kill -0 1029805 00:14:52.246 22:41:36 -- common/autotest_common.sh@931 -- # uname 00:14:52.246 22:41:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.246 22:41:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1029805 00:14:52.246 22:41:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:52.246 22:41:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:52.246 22:41:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1029805' 00:14:52.246 killing process with pid 1029805 00:14:52.246 22:41:36 -- common/autotest_common.sh@945 -- # kill 1029805 00:14:52.246 22:41:36 -- common/autotest_common.sh@950 -- # wait 1029805 00:14:52.246 22:41:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.246 22:41:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:52.246 22:41:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:52.246 22:41:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.246 22:41:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:52.246 22:41:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
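For reference, the Total row of the perf summary above is consistent with the two per-namespace rows: the IOPS add up and the average latency is their IOPS-weighted mean (numbers taken from the table above):

    311.05 + 11026.96 = 11338.01 IOPS
    (311.05 * 135895.14 + 11026.96 * 11608.52) / 11338.01 ≈ 15018.2 us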
00:14:52.246 22:41:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.246 22:41:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.795 22:41:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:54.795 00:14:54.795 real 0m43.385s 00:14:54.795 user 2m31.278s 00:14:54.795 sys 0m11.623s 00:14:54.795 22:41:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.795 22:41:39 -- common/autotest_common.sh@10 -- # set +x 00:14:54.795 ************************************ 00:14:54.795 END TEST nvmf_ns_hotplug_stress 00:14:54.795 ************************************ 00:14:54.795 22:41:39 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:54.795 22:41:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:54.795 22:41:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.795 22:41:39 -- common/autotest_common.sh@10 -- # set +x 00:14:54.795 ************************************ 00:14:54.795 START TEST nvmf_connect_stress 00:14:54.795 ************************************ 00:14:54.795 22:41:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:54.795 * Looking for test storage... 00:14:54.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.795 22:41:39 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.795 22:41:39 -- nvmf/common.sh@7 -- # uname -s 00:14:54.795 22:41:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.795 22:41:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.795 22:41:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.795 22:41:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.795 22:41:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.795 22:41:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.795 22:41:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.795 22:41:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.795 22:41:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.795 22:41:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.795 22:41:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:54.795 22:41:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:54.795 22:41:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.795 22:41:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.795 22:41:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.795 22:41:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.795 22:41:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.795 22:41:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.795 22:41:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.796 22:41:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.796 22:41:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.796 22:41:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.796 22:41:39 -- paths/export.sh@5 -- # export PATH 00:14:54.796 22:41:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.796 22:41:39 -- nvmf/common.sh@46 -- # : 0 00:14:54.796 22:41:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:54.796 22:41:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:54.796 22:41:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:54.796 22:41:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.796 22:41:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.796 22:41:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:54.796 22:41:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:54.796 22:41:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:54.796 22:41:39 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:54.796 22:41:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:54.796 22:41:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.796 22:41:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:54.796 22:41:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:54.796 22:41:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:54.796 22:41:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.796 22:41:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.796 22:41:39 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.796 22:41:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:54.796 22:41:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:54.796 22:41:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:54.796 22:41:39 -- common/autotest_common.sh@10 -- # set +x 00:15:02.967 22:41:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:02.967 22:41:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:02.967 22:41:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:02.967 22:41:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:02.967 22:41:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:02.967 22:41:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:02.967 22:41:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:02.967 22:41:46 -- nvmf/common.sh@294 -- # net_devs=() 00:15:02.967 22:41:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:02.967 22:41:46 -- nvmf/common.sh@295 -- # e810=() 00:15:02.967 22:41:46 -- nvmf/common.sh@295 -- # local -ga e810 00:15:02.967 22:41:46 -- nvmf/common.sh@296 -- # x722=() 00:15:02.967 22:41:46 -- nvmf/common.sh@296 -- # local -ga x722 00:15:02.967 22:41:46 -- nvmf/common.sh@297 -- # mlx=() 00:15:02.967 22:41:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:02.967 22:41:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.967 22:41:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:02.967 22:41:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:02.967 22:41:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:02.967 22:41:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.967 22:41:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:02.967 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:02.967 22:41:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.967 22:41:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:02.967 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:02.967 
22:41:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:02.967 22:41:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.967 22:41:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.967 22:41:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.967 22:41:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.967 22:41:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:02.967 Found net devices under 0000:31:00.0: cvl_0_0 00:15:02.967 22:41:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.967 22:41:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.967 22:41:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.967 22:41:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.967 22:41:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.967 22:41:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:02.967 Found net devices under 0000:31:00.1: cvl_0_1 00:15:02.967 22:41:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.967 22:41:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:02.967 22:41:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:02.967 22:41:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:02.967 22:41:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:02.967 22:41:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.967 22:41:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.967 22:41:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.967 22:41:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:02.967 22:41:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.967 22:41:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.967 22:41:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:02.967 22:41:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.967 22:41:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.967 22:41:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:02.967 22:41:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:02.967 22:41:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.967 22:41:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.967 22:41:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.967 22:41:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.967 22:41:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:02.967 22:41:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.967 22:41:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.967 22:41:47 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.967 22:41:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:02.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:15:02.967 00:15:02.967 --- 10.0.0.2 ping statistics --- 00:15:02.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.967 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:15:02.967 22:41:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:15:02.967 00:15:02.967 --- 10.0.0.1 ping statistics --- 00:15:02.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.967 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:15:02.967 22:41:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.967 22:41:47 -- nvmf/common.sh@410 -- # return 0 00:15:02.967 22:41:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.967 22:41:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.967 22:41:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:02.967 22:41:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:02.967 22:41:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.967 22:41:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:02.967 22:41:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:02.967 22:41:47 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:02.967 22:41:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.967 22:41:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:02.967 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:15:02.967 22:41:47 -- nvmf/common.sh@469 -- # nvmfpid=1041166 00:15:02.967 22:41:47 -- nvmf/common.sh@470 -- # waitforlisten 1041166 00:15:02.967 22:41:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:02.967 22:41:47 -- common/autotest_common.sh@819 -- # '[' -z 1041166 ']' 00:15:02.967 22:41:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.967 22:41:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:02.968 22:41:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.968 22:41:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:02.968 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:15:02.968 [2024-04-15 22:41:47.383539] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
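Condensed, the nvmf_tcp_init sequence traced above moves one E810 port (cvl_0_0) into a private network namespace to act as the target side, leaves the peer port (cvl_0_1) in the root namespace as the initiator side, opens TCP port 4420, and pings in both directions before nvmfappstart launches the target inside that namespace. A sketch assembled from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # target launched in the namespace, as seen in nvmf/common.sh@468 above
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &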
00:15:02.968 [2024-04-15 22:41:47.383657] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.968 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.968 [2024-04-15 22:41:47.471583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.968 [2024-04-15 22:41:47.542070] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.968 [2024-04-15 22:41:47.542197] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.968 [2024-04-15 22:41:47.542206] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.968 [2024-04-15 22:41:47.542213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.968 [2024-04-15 22:41:47.542341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.968 [2024-04-15 22:41:47.542496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.968 [2024-04-15 22:41:47.542498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.540 22:41:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:03.540 22:41:48 -- common/autotest_common.sh@852 -- # return 0 00:15:03.540 22:41:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:03.540 22:41:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:03.540 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:15:03.540 22:41:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.540 22:41:48 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.540 22:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.540 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:15:03.540 [2024-04-15 22:41:48.210437] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.540 22:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.540 22:41:48 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:03.540 22:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.540 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:15:03.540 22:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.540 22:41:48 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.540 22:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.540 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:15:03.540 [2024-04-15 22:41:48.251673] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.540 22:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.540 22:41:48 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:03.540 22:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.540 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:15:03.540 NULL1 00:15:03.540 22:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.540 22:41:48 -- target/connect_stress.sh@21 -- # PERF_PID=1041513 00:15:03.540 22:41:48 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:03.540 22:41:48 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:03.540 22:41:48 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.540 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.540 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.802 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.802 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.802 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.802 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.802 22:41:48 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:03.802 22:41:48 -- target/connect_stress.sh@28 -- # cat 00:15:03.802 22:41:48 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:03.802 22:41:48 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:15:03.802 22:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.802 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:15:04.063 22:41:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.063 22:41:48 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:04.063 22:41:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.063 22:41:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.063 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:15:04.324 22:41:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.324 22:41:49 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:04.324 22:41:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.324 22:41:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.324 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:15:04.584 22:41:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.584 22:41:49 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:04.584 22:41:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.584 22:41:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.584 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:15:05.155 22:41:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.155 22:41:49 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:05.155 22:41:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.155 22:41:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.155 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:15:05.415 22:41:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.415 22:41:49 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:05.415 22:41:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.415 22:41:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.415 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:15:05.675 22:41:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.675 22:41:50 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:05.675 22:41:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.675 22:41:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.675 22:41:50 -- common/autotest_common.sh@10 -- # set +x 00:15:05.936 22:41:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.936 22:41:50 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:05.936 22:41:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.936 22:41:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.936 22:41:50 -- common/autotest_common.sh@10 -- # set +x 00:15:06.197 22:41:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.197 22:41:50 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:06.197 22:41:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.197 22:41:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.197 22:41:50 -- common/autotest_common.sh@10 -- # set +x 00:15:06.767 22:41:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.767 22:41:51 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:06.767 22:41:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.767 22:41:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.767 22:41:51 -- common/autotest_common.sh@10 -- # set +x 00:15:07.027 22:41:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.027 22:41:51 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:07.027 22:41:51 -- target/connect_stress.sh@35 -- # rpc_cmd 
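Stripped of the xtrace noise, connect_stress.sh first stands up the target on the listener address verified above, then runs the connect_stress initiator for 10 seconds while repeatedly replaying the RPC batch file built by the seq 1 20 loop. The RPC and command lines below are taken verbatim from the trace; the while-loop shape and the redirection from $rpcs are inferred from the repeated kill -0 / rpc_cmd pairs that follow:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    # connect/disconnect stressor against the listener for 10 seconds
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    # keep issuing batched RPCs while the stressor is still alive
    while kill -0 $PERF_PID; do
        rpc_cmd < "$rpcs"
    done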
00:15:07.027 22:41:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.027 22:41:51 -- common/autotest_common.sh@10 -- # set +x 00:15:07.287 22:41:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.287 22:41:51 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:07.287 22:41:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.287 22:41:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.287 22:41:51 -- common/autotest_common.sh@10 -- # set +x 00:15:07.549 22:41:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.549 22:41:52 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:07.549 22:41:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.549 22:41:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.549 22:41:52 -- common/autotest_common.sh@10 -- # set +x 00:15:07.809 22:41:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.809 22:41:52 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:07.809 22:41:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.809 22:41:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.809 22:41:52 -- common/autotest_common.sh@10 -- # set +x 00:15:08.380 22:41:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.380 22:41:52 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:08.380 22:41:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.380 22:41:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.380 22:41:52 -- common/autotest_common.sh@10 -- # set +x 00:15:08.641 22:41:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.641 22:41:53 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:08.641 22:41:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.641 22:41:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.641 22:41:53 -- common/autotest_common.sh@10 -- # set +x 00:15:08.903 22:41:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.903 22:41:53 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:08.903 22:41:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.903 22:41:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.903 22:41:53 -- common/autotest_common.sh@10 -- # set +x 00:15:09.163 22:41:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.163 22:41:53 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:09.163 22:41:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.163 22:41:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.163 22:41:53 -- common/autotest_common.sh@10 -- # set +x 00:15:09.424 22:41:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.424 22:41:54 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:09.424 22:41:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.424 22:41:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.424 22:41:54 -- common/autotest_common.sh@10 -- # set +x 00:15:09.996 22:41:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.996 22:41:54 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:09.996 22:41:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.996 22:41:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.996 22:41:54 -- common/autotest_common.sh@10 -- # set +x 00:15:10.257 22:41:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.257 22:41:54 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:10.257 22:41:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.257 
22:41:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.257 22:41:54 -- common/autotest_common.sh@10 -- # set +x 00:15:10.518 22:41:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.518 22:41:55 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:10.518 22:41:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.518 22:41:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.518 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:15:10.779 22:41:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.779 22:41:55 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:10.779 22:41:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.779 22:41:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.779 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:15:11.351 22:41:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.351 22:41:55 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:11.351 22:41:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.351 22:41:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.351 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:15:11.612 22:41:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.612 22:41:56 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:11.612 22:41:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.612 22:41:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.612 22:41:56 -- common/autotest_common.sh@10 -- # set +x 00:15:11.874 22:41:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.874 22:41:56 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:11.874 22:41:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.874 22:41:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.874 22:41:56 -- common/autotest_common.sh@10 -- # set +x 00:15:12.135 22:41:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.135 22:41:56 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:12.135 22:41:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.136 22:41:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.136 22:41:56 -- common/autotest_common.sh@10 -- # set +x 00:15:12.396 22:41:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.396 22:41:57 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:12.396 22:41:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.396 22:41:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.396 22:41:57 -- common/autotest_common.sh@10 -- # set +x 00:15:12.969 22:41:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.969 22:41:57 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:12.969 22:41:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.969 22:41:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.969 22:41:57 -- common/autotest_common.sh@10 -- # set +x 00:15:13.229 22:41:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.229 22:41:57 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:13.229 22:41:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.229 22:41:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.229 22:41:57 -- common/autotest_common.sh@10 -- # set +x 00:15:13.490 22:41:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.490 22:41:58 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:13.490 22:41:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.490 22:41:58 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.490 22:41:58 -- common/autotest_common.sh@10 -- # set +x 00:15:13.750 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.750 22:41:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.750 22:41:58 -- target/connect_stress.sh@34 -- # kill -0 1041513 00:15:13.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1041513) - No such process 00:15:13.750 22:41:58 -- target/connect_stress.sh@38 -- # wait 1041513 00:15:13.750 22:41:58 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:13.750 22:41:58 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:13.750 22:41:58 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:13.750 22:41:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:13.750 22:41:58 -- nvmf/common.sh@116 -- # sync 00:15:13.750 22:41:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:13.750 22:41:58 -- nvmf/common.sh@119 -- # set +e 00:15:13.750 22:41:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:13.750 22:41:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:13.750 rmmod nvme_tcp 00:15:13.750 rmmod nvme_fabrics 00:15:13.750 rmmod nvme_keyring 00:15:13.750 22:41:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:13.750 22:41:58 -- nvmf/common.sh@123 -- # set -e 00:15:13.750 22:41:58 -- nvmf/common.sh@124 -- # return 0 00:15:13.750 22:41:58 -- nvmf/common.sh@477 -- # '[' -n 1041166 ']' 00:15:13.750 22:41:58 -- nvmf/common.sh@478 -- # killprocess 1041166 00:15:13.750 22:41:58 -- common/autotest_common.sh@926 -- # '[' -z 1041166 ']' 00:15:13.750 22:41:58 -- common/autotest_common.sh@930 -- # kill -0 1041166 00:15:13.751 22:41:58 -- common/autotest_common.sh@931 -- # uname 00:15:13.751 22:41:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:13.751 22:41:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1041166 00:15:14.012 22:41:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:14.012 22:41:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:14.012 22:41:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1041166' 00:15:14.012 killing process with pid 1041166 00:15:14.012 22:41:58 -- common/autotest_common.sh@945 -- # kill 1041166 00:15:14.012 22:41:58 -- common/autotest_common.sh@950 -- # wait 1041166 00:15:14.012 22:41:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:14.012 22:41:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:14.012 22:41:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:14.012 22:41:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.012 22:41:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:14.012 22:41:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.012 22:41:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.012 22:41:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.558 22:42:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:16.558 00:15:16.558 real 0m21.659s 00:15:16.558 user 0m42.466s 00:15:16.558 sys 0m9.190s 00:15:16.558 22:42:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.558 22:42:00 -- common/autotest_common.sh@10 -- # set +x 00:15:16.558 ************************************ 00:15:16.558 END TEST nvmf_connect_stress 00:15:16.558 
************************************ 00:15:16.558 22:42:00 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:16.558 22:42:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:16.558 22:42:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:16.558 22:42:00 -- common/autotest_common.sh@10 -- # set +x 00:15:16.558 ************************************ 00:15:16.558 START TEST nvmf_fused_ordering 00:15:16.558 ************************************ 00:15:16.558 22:42:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:16.558 * Looking for test storage... 00:15:16.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.558 22:42:00 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.558 22:42:00 -- nvmf/common.sh@7 -- # uname -s 00:15:16.558 22:42:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.558 22:42:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.558 22:42:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.558 22:42:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.558 22:42:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.558 22:42:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.558 22:42:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.558 22:42:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.558 22:42:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.558 22:42:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.558 22:42:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.558 22:42:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.558 22:42:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.558 22:42:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.558 22:42:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.558 22:42:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.559 22:42:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.559 22:42:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.559 22:42:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.559 22:42:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.559 22:42:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.559 22:42:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.559 22:42:00 -- paths/export.sh@5 -- # export PATH 00:15:16.559 22:42:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.559 22:42:00 -- nvmf/common.sh@46 -- # : 0 00:15:16.559 22:42:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:16.559 22:42:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:16.559 22:42:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:16.559 22:42:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.559 22:42:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.559 22:42:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:16.559 22:42:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:16.559 22:42:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.559 22:42:00 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:16.559 22:42:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:16.559 22:42:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.559 22:42:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:16.559 22:42:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:16.559 22:42:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:16.559 22:42:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.559 22:42:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.559 22:42:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.559 22:42:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:16.559 22:42:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:16.559 22:42:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:16.559 22:42:00 -- common/autotest_common.sh@10 -- # set +x 00:15:24.701 22:42:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:24.701 22:42:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:24.701 22:42:08 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:15:24.701 22:42:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:24.701 22:42:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:24.701 22:42:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:24.701 22:42:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:24.701 22:42:08 -- nvmf/common.sh@294 -- # net_devs=() 00:15:24.701 22:42:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:24.701 22:42:08 -- nvmf/common.sh@295 -- # e810=() 00:15:24.701 22:42:08 -- nvmf/common.sh@295 -- # local -ga e810 00:15:24.701 22:42:08 -- nvmf/common.sh@296 -- # x722=() 00:15:24.701 22:42:08 -- nvmf/common.sh@296 -- # local -ga x722 00:15:24.701 22:42:08 -- nvmf/common.sh@297 -- # mlx=() 00:15:24.701 22:42:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:24.701 22:42:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.701 22:42:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:24.701 22:42:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:24.701 22:42:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:24.701 22:42:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:24.701 22:42:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:24.701 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:24.701 22:42:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:24.701 22:42:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:24.701 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:24.701 22:42:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:24.701 22:42:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
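The device discovery being traced here (gather_supported_nvmf_pci_devs) buckets candidate NICs by PCI vendor/device ID and then, in the loop that follows, resolves each PCI address to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A reduced sketch; pci_bus_cache is assumed to have been populated earlier in nvmf/common.sh, and only the two E810 device IDs used in this run are shown:

    intel=0x8086
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")          # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done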
00:15:24.701 22:42:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:24.701 22:42:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.701 22:42:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:24.701 22:42:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.701 22:42:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:24.701 Found net devices under 0000:31:00.0: cvl_0_0 00:15:24.701 22:42:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.701 22:42:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:24.701 22:42:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.701 22:42:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:24.701 22:42:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.701 22:42:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:24.701 Found net devices under 0000:31:00.1: cvl_0_1 00:15:24.701 22:42:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.701 22:42:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:24.701 22:42:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:24.701 22:42:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:24.701 22:42:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:24.701 22:42:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.701 22:42:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.701 22:42:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.701 22:42:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:24.701 22:42:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.701 22:42:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.701 22:42:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:24.701 22:42:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.701 22:42:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.701 22:42:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:24.701 22:42:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:24.702 22:42:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.702 22:42:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.702 22:42:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.702 22:42:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.702 22:42:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:24.702 22:42:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.702 22:42:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.702 22:42:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.702 22:42:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:24.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:24.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:15:24.702 00:15:24.702 --- 10.0.0.2 ping statistics --- 00:15:24.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.702 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:15:24.702 22:42:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:15:24.702 00:15:24.702 --- 10.0.0.1 ping statistics --- 00:15:24.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.702 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:15:24.702 22:42:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.702 22:42:09 -- nvmf/common.sh@410 -- # return 0 00:15:24.702 22:42:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:24.702 22:42:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.702 22:42:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:24.702 22:42:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:24.702 22:42:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.702 22:42:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:24.702 22:42:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:24.702 22:42:09 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:24.702 22:42:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:24.702 22:42:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:24.702 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:24.702 22:42:09 -- nvmf/common.sh@469 -- # nvmfpid=1048824 00:15:24.702 22:42:09 -- nvmf/common.sh@470 -- # waitforlisten 1048824 00:15:24.702 22:42:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:24.702 22:42:09 -- common/autotest_common.sh@819 -- # '[' -z 1048824 ']' 00:15:24.702 22:42:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.702 22:42:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:24.702 22:42:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.702 22:42:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:24.702 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:24.702 [2024-04-15 22:42:09.131597] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:24.702 [2024-04-15 22:42:09.131660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.702 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.702 [2024-04-15 22:42:09.209663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.702 [2024-04-15 22:42:09.281070] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:24.702 [2024-04-15 22:42:09.281194] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:24.702 [2024-04-15 22:42:09.281205] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.702 [2024-04-15 22:42:09.281212] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.702 [2024-04-15 22:42:09.281229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.273 22:42:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:25.273 22:42:09 -- common/autotest_common.sh@852 -- # return 0 00:15:25.273 22:42:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:25.273 22:42:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:25.273 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 22:42:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.273 22:42:09 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.273 22:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.273 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 [2024-04-15 22:42:09.936170] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.273 22:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.273 22:42:09 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:25.273 22:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.273 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 22:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.273 22:42:09 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.273 22:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.273 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 [2024-04-15 22:42:09.960304] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.273 22:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.273 22:42:09 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:25.273 22:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.273 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 NULL1 00:15:25.273 22:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.273 22:42:09 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:25.273 22:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.273 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 22:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.273 22:42:09 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:25.273 22:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.273 22:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 22:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.273 22:42:09 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:25.273 [2024-04-15 22:42:10.024136] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
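For this test the same target bring-up is repeated with a single-core mask, the NULL1 bdev is attached as a namespace, and the fused_ordering initiator is then pointed at the subsystem; the fused_ordering(N) lines that follow are the tool's iteration-by-iteration output. Condensed from the RPCs traced above:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'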
00:15:25.273 [2024-04-15 22:42:10.024198] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048857 ] 00:15:25.273 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.845 Attached to nqn.2016-06.io.spdk:cnode1 00:15:25.845 Namespace ID: 1 size: 1GB 00:15:25.845 fused_ordering(0) 00:15:25.845 fused_ordering(1) 00:15:25.845 fused_ordering(2) 00:15:25.845 fused_ordering(3) 00:15:25.845 fused_ordering(4) 00:15:25.845 fused_ordering(5) 00:15:25.845 fused_ordering(6) 00:15:25.845 fused_ordering(7) 00:15:25.845 fused_ordering(8) 00:15:25.845 fused_ordering(9) 00:15:25.845 fused_ordering(10) 00:15:25.845 fused_ordering(11) 00:15:25.845 fused_ordering(12) 00:15:25.845 fused_ordering(13) 00:15:25.845 fused_ordering(14) 00:15:25.845 fused_ordering(15) 00:15:25.845 fused_ordering(16) 00:15:25.845 fused_ordering(17) 00:15:25.845 fused_ordering(18) 00:15:25.845 fused_ordering(19) 00:15:25.845 fused_ordering(20) 00:15:25.845 fused_ordering(21) 00:15:25.845 fused_ordering(22) 00:15:25.845 fused_ordering(23) 00:15:25.845 fused_ordering(24) 00:15:25.845 fused_ordering(25) 00:15:25.845 fused_ordering(26) 00:15:25.845 fused_ordering(27) 00:15:25.845 fused_ordering(28) 00:15:25.845 fused_ordering(29) 00:15:25.845 fused_ordering(30) 00:15:25.845 fused_ordering(31) 00:15:25.845 fused_ordering(32) 00:15:25.845 fused_ordering(33) 00:15:25.845 fused_ordering(34) 00:15:25.845 fused_ordering(35) 00:15:25.845 fused_ordering(36) 00:15:25.845 fused_ordering(37) 00:15:25.845 fused_ordering(38) 00:15:25.845 fused_ordering(39) 00:15:25.845 fused_ordering(40) 00:15:25.845 fused_ordering(41) 00:15:25.845 fused_ordering(42) 00:15:25.845 fused_ordering(43) 00:15:25.845 fused_ordering(44) 00:15:25.845 fused_ordering(45) 00:15:25.845 fused_ordering(46) 00:15:25.845 fused_ordering(47) 00:15:25.845 fused_ordering(48) 00:15:25.845 fused_ordering(49) 00:15:25.845 fused_ordering(50) 00:15:25.845 fused_ordering(51) 00:15:25.845 fused_ordering(52) 00:15:25.845 fused_ordering(53) 00:15:25.845 fused_ordering(54) 00:15:25.845 fused_ordering(55) 00:15:25.845 fused_ordering(56) 00:15:25.845 fused_ordering(57) 00:15:25.845 fused_ordering(58) 00:15:25.845 fused_ordering(59) 00:15:25.845 fused_ordering(60) 00:15:25.845 fused_ordering(61) 00:15:25.845 fused_ordering(62) 00:15:25.845 fused_ordering(63) 00:15:25.845 fused_ordering(64) 00:15:25.845 fused_ordering(65) 00:15:25.845 fused_ordering(66) 00:15:25.845 fused_ordering(67) 00:15:25.845 fused_ordering(68) 00:15:25.845 fused_ordering(69) 00:15:25.845 fused_ordering(70) 00:15:25.845 fused_ordering(71) 00:15:25.845 fused_ordering(72) 00:15:25.845 fused_ordering(73) 00:15:25.845 fused_ordering(74) 00:15:25.845 fused_ordering(75) 00:15:25.845 fused_ordering(76) 00:15:25.845 fused_ordering(77) 00:15:25.845 fused_ordering(78) 00:15:25.845 fused_ordering(79) 00:15:25.845 fused_ordering(80) 00:15:25.845 fused_ordering(81) 00:15:25.845 fused_ordering(82) 00:15:25.845 fused_ordering(83) 00:15:25.845 fused_ordering(84) 00:15:25.845 fused_ordering(85) 00:15:25.845 fused_ordering(86) 00:15:25.845 fused_ordering(87) 00:15:25.845 fused_ordering(88) 00:15:25.845 fused_ordering(89) 00:15:25.845 fused_ordering(90) 00:15:25.845 fused_ordering(91) 00:15:25.845 fused_ordering(92) 00:15:25.845 fused_ordering(93) 00:15:25.845 fused_ordering(94) 00:15:25.845 fused_ordering(95) 00:15:25.845 fused_ordering(96) 00:15:25.845 
fused_ordering(97) 00:15:25.845 fused_ordering(98) 00:15:25.845 fused_ordering(99) 00:15:25.845 fused_ordering(100) 00:15:25.845 fused_ordering(101) 00:15:25.845 fused_ordering(102) 00:15:25.845 fused_ordering(103) 00:15:25.845 fused_ordering(104) 00:15:25.845 fused_ordering(105) 00:15:25.845 fused_ordering(106) 00:15:25.845 fused_ordering(107) 00:15:25.845 fused_ordering(108) 00:15:25.845 fused_ordering(109) 00:15:25.845 fused_ordering(110) 00:15:25.845 fused_ordering(111) 00:15:25.845 fused_ordering(112) 00:15:25.845 fused_ordering(113) 00:15:25.845 fused_ordering(114) 00:15:25.845 fused_ordering(115) 00:15:25.845 fused_ordering(116) 00:15:25.845 fused_ordering(117) 00:15:25.845 fused_ordering(118) 00:15:25.845 fused_ordering(119) 00:15:25.845 fused_ordering(120) 00:15:25.845 fused_ordering(121) 00:15:25.845 fused_ordering(122) 00:15:25.845 fused_ordering(123) 00:15:25.845 fused_ordering(124) 00:15:25.845 fused_ordering(125) 00:15:25.845 fused_ordering(126) 00:15:25.845 fused_ordering(127) 00:15:25.845 fused_ordering(128) 00:15:25.845 fused_ordering(129) 00:15:25.845 fused_ordering(130) 00:15:25.845 fused_ordering(131) 00:15:25.845 fused_ordering(132) 00:15:25.845 fused_ordering(133) 00:15:25.845 fused_ordering(134) 00:15:25.845 fused_ordering(135) 00:15:25.845 fused_ordering(136) 00:15:25.845 fused_ordering(137) 00:15:25.845 fused_ordering(138) 00:15:25.845 fused_ordering(139) 00:15:25.845 fused_ordering(140) 00:15:25.845 fused_ordering(141) 00:15:25.845 fused_ordering(142) 00:15:25.845 fused_ordering(143) 00:15:25.845 fused_ordering(144) 00:15:25.845 fused_ordering(145) 00:15:25.845 fused_ordering(146) 00:15:25.845 fused_ordering(147) 00:15:25.845 fused_ordering(148) 00:15:25.845 fused_ordering(149) 00:15:25.845 fused_ordering(150) 00:15:25.845 fused_ordering(151) 00:15:25.845 fused_ordering(152) 00:15:25.845 fused_ordering(153) 00:15:25.845 fused_ordering(154) 00:15:25.845 fused_ordering(155) 00:15:25.845 fused_ordering(156) 00:15:25.845 fused_ordering(157) 00:15:25.845 fused_ordering(158) 00:15:25.845 fused_ordering(159) 00:15:25.845 fused_ordering(160) 00:15:25.845 fused_ordering(161) 00:15:25.845 fused_ordering(162) 00:15:25.845 fused_ordering(163) 00:15:25.845 fused_ordering(164) 00:15:25.845 fused_ordering(165) 00:15:25.845 fused_ordering(166) 00:15:25.845 fused_ordering(167) 00:15:25.845 fused_ordering(168) 00:15:25.845 fused_ordering(169) 00:15:25.845 fused_ordering(170) 00:15:25.845 fused_ordering(171) 00:15:25.845 fused_ordering(172) 00:15:25.845 fused_ordering(173) 00:15:25.845 fused_ordering(174) 00:15:25.845 fused_ordering(175) 00:15:25.845 fused_ordering(176) 00:15:25.845 fused_ordering(177) 00:15:25.845 fused_ordering(178) 00:15:25.845 fused_ordering(179) 00:15:25.845 fused_ordering(180) 00:15:25.845 fused_ordering(181) 00:15:25.845 fused_ordering(182) 00:15:25.845 fused_ordering(183) 00:15:25.845 fused_ordering(184) 00:15:25.845 fused_ordering(185) 00:15:25.845 fused_ordering(186) 00:15:25.845 fused_ordering(187) 00:15:25.845 fused_ordering(188) 00:15:25.845 fused_ordering(189) 00:15:25.845 fused_ordering(190) 00:15:25.845 fused_ordering(191) 00:15:25.845 fused_ordering(192) 00:15:25.845 fused_ordering(193) 00:15:25.845 fused_ordering(194) 00:15:25.845 fused_ordering(195) 00:15:25.845 fused_ordering(196) 00:15:25.845 fused_ordering(197) 00:15:25.845 fused_ordering(198) 00:15:25.845 fused_ordering(199) 00:15:25.845 fused_ordering(200) 00:15:25.845 fused_ordering(201) 00:15:25.845 fused_ordering(202) 00:15:25.845 fused_ordering(203) 00:15:25.845 fused_ordering(204) 
00:15:25.845 fused_ordering(205) 00:15:26.109 fused_ordering(206) 00:15:26.109 fused_ordering(207) 00:15:26.109 fused_ordering(208) 00:15:26.109 fused_ordering(209) 00:15:26.109 fused_ordering(210) 00:15:26.109 fused_ordering(211) 00:15:26.109 fused_ordering(212) 00:15:26.109 fused_ordering(213) 00:15:26.109 fused_ordering(214) 00:15:26.109 fused_ordering(215) 00:15:26.109 fused_ordering(216) 00:15:26.109 fused_ordering(217) 00:15:26.109 fused_ordering(218) 00:15:26.109 fused_ordering(219) 00:15:26.109 fused_ordering(220) 00:15:26.109 fused_ordering(221) 00:15:26.109 fused_ordering(222) 00:15:26.109 fused_ordering(223) 00:15:26.109 fused_ordering(224) 00:15:26.109 fused_ordering(225) 00:15:26.109 fused_ordering(226) 00:15:26.109 fused_ordering(227) 00:15:26.109 fused_ordering(228) 00:15:26.109 fused_ordering(229) 00:15:26.109 fused_ordering(230) 00:15:26.109 fused_ordering(231) 00:15:26.109 fused_ordering(232) 00:15:26.109 fused_ordering(233) 00:15:26.109 fused_ordering(234) 00:15:26.109 fused_ordering(235) 00:15:26.109 fused_ordering(236) 00:15:26.109 fused_ordering(237) 00:15:26.109 fused_ordering(238) 00:15:26.109 fused_ordering(239) 00:15:26.109 fused_ordering(240) 00:15:26.109 fused_ordering(241) 00:15:26.109 fused_ordering(242) 00:15:26.109 fused_ordering(243) 00:15:26.109 fused_ordering(244) 00:15:26.109 fused_ordering(245) 00:15:26.109 fused_ordering(246) 00:15:26.109 fused_ordering(247) 00:15:26.109 fused_ordering(248) 00:15:26.109 fused_ordering(249) 00:15:26.109 fused_ordering(250) 00:15:26.109 fused_ordering(251) 00:15:26.109 fused_ordering(252) 00:15:26.109 fused_ordering(253) 00:15:26.109 fused_ordering(254) 00:15:26.109 fused_ordering(255) 00:15:26.109 fused_ordering(256) 00:15:26.109 fused_ordering(257) 00:15:26.109 fused_ordering(258) 00:15:26.109 fused_ordering(259) 00:15:26.109 fused_ordering(260) 00:15:26.109 fused_ordering(261) 00:15:26.109 fused_ordering(262) 00:15:26.109 fused_ordering(263) 00:15:26.109 fused_ordering(264) 00:15:26.109 fused_ordering(265) 00:15:26.109 fused_ordering(266) 00:15:26.109 fused_ordering(267) 00:15:26.109 fused_ordering(268) 00:15:26.109 fused_ordering(269) 00:15:26.109 fused_ordering(270) 00:15:26.109 fused_ordering(271) 00:15:26.109 fused_ordering(272) 00:15:26.109 fused_ordering(273) 00:15:26.109 fused_ordering(274) 00:15:26.109 fused_ordering(275) 00:15:26.109 fused_ordering(276) 00:15:26.109 fused_ordering(277) 00:15:26.109 fused_ordering(278) 00:15:26.109 fused_ordering(279) 00:15:26.109 fused_ordering(280) 00:15:26.109 fused_ordering(281) 00:15:26.109 fused_ordering(282) 00:15:26.109 fused_ordering(283) 00:15:26.109 fused_ordering(284) 00:15:26.109 fused_ordering(285) 00:15:26.109 fused_ordering(286) 00:15:26.109 fused_ordering(287) 00:15:26.109 fused_ordering(288) 00:15:26.109 fused_ordering(289) 00:15:26.109 fused_ordering(290) 00:15:26.109 fused_ordering(291) 00:15:26.109 fused_ordering(292) 00:15:26.109 fused_ordering(293) 00:15:26.109 fused_ordering(294) 00:15:26.109 fused_ordering(295) 00:15:26.109 fused_ordering(296) 00:15:26.109 fused_ordering(297) 00:15:26.109 fused_ordering(298) 00:15:26.109 fused_ordering(299) 00:15:26.109 fused_ordering(300) 00:15:26.109 fused_ordering(301) 00:15:26.109 fused_ordering(302) 00:15:26.109 fused_ordering(303) 00:15:26.109 fused_ordering(304) 00:15:26.109 fused_ordering(305) 00:15:26.109 fused_ordering(306) 00:15:26.109 fused_ordering(307) 00:15:26.109 fused_ordering(308) 00:15:26.109 fused_ordering(309) 00:15:26.109 fused_ordering(310) 00:15:26.109 fused_ordering(311) 00:15:26.109 
fused_ordering(312) 00:15:26.109 fused_ordering(313) 00:15:26.109 fused_ordering(314) 00:15:26.109 fused_ordering(315) 00:15:26.109 fused_ordering(316) 00:15:26.109 fused_ordering(317) 00:15:26.109 fused_ordering(318) 00:15:26.109 fused_ordering(319) 00:15:26.109 fused_ordering(320) 00:15:26.109 fused_ordering(321) 00:15:26.109 fused_ordering(322) 00:15:26.109 fused_ordering(323) 00:15:26.109 fused_ordering(324) 00:15:26.109 fused_ordering(325) 00:15:26.109 fused_ordering(326) 00:15:26.109 fused_ordering(327) 00:15:26.109 fused_ordering(328) 00:15:26.109 fused_ordering(329) 00:15:26.109 fused_ordering(330) 00:15:26.109 fused_ordering(331) 00:15:26.109 fused_ordering(332) 00:15:26.109 fused_ordering(333) 00:15:26.109 fused_ordering(334) 00:15:26.109 fused_ordering(335) 00:15:26.109 fused_ordering(336) 00:15:26.109 fused_ordering(337) 00:15:26.109 fused_ordering(338) 00:15:26.109 fused_ordering(339) 00:15:26.109 fused_ordering(340) 00:15:26.109 fused_ordering(341) 00:15:26.109 fused_ordering(342) 00:15:26.109 fused_ordering(343) 00:15:26.109 fused_ordering(344) 00:15:26.109 fused_ordering(345) 00:15:26.109 fused_ordering(346) 00:15:26.109 fused_ordering(347) 00:15:26.109 fused_ordering(348) 00:15:26.109 fused_ordering(349) 00:15:26.109 fused_ordering(350) 00:15:26.109 fused_ordering(351) 00:15:26.109 fused_ordering(352) 00:15:26.109 fused_ordering(353) 00:15:26.109 fused_ordering(354) 00:15:26.109 fused_ordering(355) 00:15:26.109 fused_ordering(356) 00:15:26.109 fused_ordering(357) 00:15:26.109 fused_ordering(358) 00:15:26.109 fused_ordering(359) 00:15:26.109 fused_ordering(360) 00:15:26.109 fused_ordering(361) 00:15:26.109 fused_ordering(362) 00:15:26.109 fused_ordering(363) 00:15:26.109 fused_ordering(364) 00:15:26.109 fused_ordering(365) 00:15:26.109 fused_ordering(366) 00:15:26.109 fused_ordering(367) 00:15:26.109 fused_ordering(368) 00:15:26.109 fused_ordering(369) 00:15:26.109 fused_ordering(370) 00:15:26.109 fused_ordering(371) 00:15:26.109 fused_ordering(372) 00:15:26.109 fused_ordering(373) 00:15:26.109 fused_ordering(374) 00:15:26.109 fused_ordering(375) 00:15:26.109 fused_ordering(376) 00:15:26.109 fused_ordering(377) 00:15:26.109 fused_ordering(378) 00:15:26.109 fused_ordering(379) 00:15:26.109 fused_ordering(380) 00:15:26.109 fused_ordering(381) 00:15:26.109 fused_ordering(382) 00:15:26.109 fused_ordering(383) 00:15:26.109 fused_ordering(384) 00:15:26.109 fused_ordering(385) 00:15:26.109 fused_ordering(386) 00:15:26.109 fused_ordering(387) 00:15:26.109 fused_ordering(388) 00:15:26.109 fused_ordering(389) 00:15:26.109 fused_ordering(390) 00:15:26.109 fused_ordering(391) 00:15:26.109 fused_ordering(392) 00:15:26.109 fused_ordering(393) 00:15:26.109 fused_ordering(394) 00:15:26.109 fused_ordering(395) 00:15:26.109 fused_ordering(396) 00:15:26.109 fused_ordering(397) 00:15:26.109 fused_ordering(398) 00:15:26.109 fused_ordering(399) 00:15:26.109 fused_ordering(400) 00:15:26.109 fused_ordering(401) 00:15:26.109 fused_ordering(402) 00:15:26.109 fused_ordering(403) 00:15:26.109 fused_ordering(404) 00:15:26.109 fused_ordering(405) 00:15:26.109 fused_ordering(406) 00:15:26.109 fused_ordering(407) 00:15:26.109 fused_ordering(408) 00:15:26.109 fused_ordering(409) 00:15:26.109 fused_ordering(410) 00:15:26.736 fused_ordering(411) 00:15:26.736 fused_ordering(412) 00:15:26.736 fused_ordering(413) 00:15:26.736 fused_ordering(414) 00:15:26.736 fused_ordering(415) 00:15:26.736 fused_ordering(416) 00:15:26.736 fused_ordering(417) 00:15:26.736 fused_ordering(418) 00:15:26.736 fused_ordering(419) 
00:15:26.736 fused_ordering(420) 00:15:26.736 fused_ordering(421) 00:15:26.736 fused_ordering(422) 00:15:26.736 fused_ordering(423) 00:15:26.736 fused_ordering(424) 00:15:26.736 fused_ordering(425) 00:15:26.736 fused_ordering(426) 00:15:26.736 fused_ordering(427) 00:15:26.736 fused_ordering(428) 00:15:26.736 fused_ordering(429) 00:15:26.736 fused_ordering(430) 00:15:26.736 fused_ordering(431) 00:15:26.736 fused_ordering(432) 00:15:26.736 fused_ordering(433) 00:15:26.736 fused_ordering(434) 00:15:26.736 fused_ordering(435) 00:15:26.736 fused_ordering(436) 00:15:26.736 fused_ordering(437) 00:15:26.736 fused_ordering(438) 00:15:26.736 fused_ordering(439) 00:15:26.736 fused_ordering(440) 00:15:26.736 fused_ordering(441) 00:15:26.736 fused_ordering(442) 00:15:26.736 fused_ordering(443) 00:15:26.736 fused_ordering(444) 00:15:26.736 fused_ordering(445) 00:15:26.736 fused_ordering(446) 00:15:26.736 fused_ordering(447) 00:15:26.736 fused_ordering(448) 00:15:26.736 fused_ordering(449) 00:15:26.736 fused_ordering(450) 00:15:26.736 fused_ordering(451) 00:15:26.736 fused_ordering(452) 00:15:26.736 fused_ordering(453) 00:15:26.736 fused_ordering(454) 00:15:26.736 fused_ordering(455) 00:15:26.736 fused_ordering(456) 00:15:26.736 fused_ordering(457) 00:15:26.736 fused_ordering(458) 00:15:26.736 fused_ordering(459) 00:15:26.736 fused_ordering(460) 00:15:26.736 fused_ordering(461) 00:15:26.736 fused_ordering(462) 00:15:26.736 fused_ordering(463) 00:15:26.736 fused_ordering(464) 00:15:26.736 fused_ordering(465) 00:15:26.736 fused_ordering(466) 00:15:26.736 fused_ordering(467) 00:15:26.736 fused_ordering(468) 00:15:26.736 fused_ordering(469) 00:15:26.736 fused_ordering(470) 00:15:26.736 fused_ordering(471) 00:15:26.736 fused_ordering(472) 00:15:26.736 fused_ordering(473) 00:15:26.736 fused_ordering(474) 00:15:26.736 fused_ordering(475) 00:15:26.736 fused_ordering(476) 00:15:26.736 fused_ordering(477) 00:15:26.736 fused_ordering(478) 00:15:26.736 fused_ordering(479) 00:15:26.736 fused_ordering(480) 00:15:26.736 fused_ordering(481) 00:15:26.736 fused_ordering(482) 00:15:26.736 fused_ordering(483) 00:15:26.736 fused_ordering(484) 00:15:26.736 fused_ordering(485) 00:15:26.736 fused_ordering(486) 00:15:26.736 fused_ordering(487) 00:15:26.736 fused_ordering(488) 00:15:26.736 fused_ordering(489) 00:15:26.736 fused_ordering(490) 00:15:26.736 fused_ordering(491) 00:15:26.736 fused_ordering(492) 00:15:26.736 fused_ordering(493) 00:15:26.736 fused_ordering(494) 00:15:26.736 fused_ordering(495) 00:15:26.736 fused_ordering(496) 00:15:26.736 fused_ordering(497) 00:15:26.736 fused_ordering(498) 00:15:26.736 fused_ordering(499) 00:15:26.736 fused_ordering(500) 00:15:26.736 fused_ordering(501) 00:15:26.736 fused_ordering(502) 00:15:26.736 fused_ordering(503) 00:15:26.736 fused_ordering(504) 00:15:26.736 fused_ordering(505) 00:15:26.736 fused_ordering(506) 00:15:26.736 fused_ordering(507) 00:15:26.736 fused_ordering(508) 00:15:26.736 fused_ordering(509) 00:15:26.736 fused_ordering(510) 00:15:26.736 fused_ordering(511) 00:15:26.736 fused_ordering(512) 00:15:26.736 fused_ordering(513) 00:15:26.736 fused_ordering(514) 00:15:26.736 fused_ordering(515) 00:15:26.736 fused_ordering(516) 00:15:26.736 fused_ordering(517) 00:15:26.736 fused_ordering(518) 00:15:26.736 fused_ordering(519) 00:15:26.736 fused_ordering(520) 00:15:26.736 fused_ordering(521) 00:15:26.736 fused_ordering(522) 00:15:26.736 fused_ordering(523) 00:15:26.736 fused_ordering(524) 00:15:26.736 fused_ordering(525) 00:15:26.736 fused_ordering(526) 00:15:26.736 
fused_ordering(527) 00:15:26.736 fused_ordering(528) 00:15:26.737 fused_ordering(529) 00:15:26.737 fused_ordering(530) 00:15:26.737 fused_ordering(531) 00:15:26.737 fused_ordering(532) 00:15:26.737 fused_ordering(533) 00:15:26.737 fused_ordering(534) 00:15:26.737 fused_ordering(535) 00:15:26.737 fused_ordering(536) 00:15:26.737 fused_ordering(537) 00:15:26.737 fused_ordering(538) 00:15:26.737 fused_ordering(539) 00:15:26.737 fused_ordering(540) 00:15:26.737 fused_ordering(541) 00:15:26.737 fused_ordering(542) 00:15:26.737 fused_ordering(543) 00:15:26.737 fused_ordering(544) 00:15:26.737 fused_ordering(545) 00:15:26.737 fused_ordering(546) 00:15:26.737 fused_ordering(547) 00:15:26.737 fused_ordering(548) 00:15:26.737 fused_ordering(549) 00:15:26.737 fused_ordering(550) 00:15:26.737 fused_ordering(551) 00:15:26.737 fused_ordering(552) 00:15:26.737 fused_ordering(553) 00:15:26.737 fused_ordering(554) 00:15:26.737 fused_ordering(555) 00:15:26.737 fused_ordering(556) 00:15:26.737 fused_ordering(557) 00:15:26.737 fused_ordering(558) 00:15:26.737 fused_ordering(559) 00:15:26.737 fused_ordering(560) 00:15:26.737 fused_ordering(561) 00:15:26.737 fused_ordering(562) 00:15:26.737 fused_ordering(563) 00:15:26.737 fused_ordering(564) 00:15:26.737 fused_ordering(565) 00:15:26.737 fused_ordering(566) 00:15:26.737 fused_ordering(567) 00:15:26.737 fused_ordering(568) 00:15:26.737 fused_ordering(569) 00:15:26.737 fused_ordering(570) 00:15:26.737 fused_ordering(571) 00:15:26.737 fused_ordering(572) 00:15:26.737 fused_ordering(573) 00:15:26.737 fused_ordering(574) 00:15:26.737 fused_ordering(575) 00:15:26.737 fused_ordering(576) 00:15:26.737 fused_ordering(577) 00:15:26.737 fused_ordering(578) 00:15:26.737 fused_ordering(579) 00:15:26.737 fused_ordering(580) 00:15:26.737 fused_ordering(581) 00:15:26.737 fused_ordering(582) 00:15:26.737 fused_ordering(583) 00:15:26.737 fused_ordering(584) 00:15:26.737 fused_ordering(585) 00:15:26.737 fused_ordering(586) 00:15:26.737 fused_ordering(587) 00:15:26.737 fused_ordering(588) 00:15:26.737 fused_ordering(589) 00:15:26.737 fused_ordering(590) 00:15:26.737 fused_ordering(591) 00:15:26.737 fused_ordering(592) 00:15:26.737 fused_ordering(593) 00:15:26.737 fused_ordering(594) 00:15:26.737 fused_ordering(595) 00:15:26.737 fused_ordering(596) 00:15:26.737 fused_ordering(597) 00:15:26.737 fused_ordering(598) 00:15:26.737 fused_ordering(599) 00:15:26.737 fused_ordering(600) 00:15:26.737 fused_ordering(601) 00:15:26.737 fused_ordering(602) 00:15:26.737 fused_ordering(603) 00:15:26.737 fused_ordering(604) 00:15:26.737 fused_ordering(605) 00:15:26.737 fused_ordering(606) 00:15:26.737 fused_ordering(607) 00:15:26.737 fused_ordering(608) 00:15:26.737 fused_ordering(609) 00:15:26.737 fused_ordering(610) 00:15:26.737 fused_ordering(611) 00:15:26.737 fused_ordering(612) 00:15:26.737 fused_ordering(613) 00:15:26.737 fused_ordering(614) 00:15:26.737 fused_ordering(615) 00:15:27.309 fused_ordering(616) 00:15:27.309 fused_ordering(617) 00:15:27.309 fused_ordering(618) 00:15:27.309 fused_ordering(619) 00:15:27.309 fused_ordering(620) 00:15:27.309 fused_ordering(621) 00:15:27.309 fused_ordering(622) 00:15:27.309 fused_ordering(623) 00:15:27.309 fused_ordering(624) 00:15:27.309 fused_ordering(625) 00:15:27.309 fused_ordering(626) 00:15:27.309 fused_ordering(627) 00:15:27.309 fused_ordering(628) 00:15:27.309 fused_ordering(629) 00:15:27.309 fused_ordering(630) 00:15:27.309 fused_ordering(631) 00:15:27.309 fused_ordering(632) 00:15:27.309 fused_ordering(633) 00:15:27.309 fused_ordering(634) 
00:15:27.309 fused_ordering(635) 00:15:27.309 fused_ordering(636) 00:15:27.309 fused_ordering(637) 00:15:27.309 fused_ordering(638) 00:15:27.309 fused_ordering(639) 00:15:27.309 fused_ordering(640) 00:15:27.309 fused_ordering(641) 00:15:27.309 fused_ordering(642) 00:15:27.309 fused_ordering(643) 00:15:27.309 fused_ordering(644) 00:15:27.309 fused_ordering(645) 00:15:27.309 fused_ordering(646) 00:15:27.309 fused_ordering(647) 00:15:27.309 fused_ordering(648) 00:15:27.309 fused_ordering(649) 00:15:27.309 fused_ordering(650) 00:15:27.309 fused_ordering(651) 00:15:27.309 fused_ordering(652) 00:15:27.309 fused_ordering(653) 00:15:27.309 fused_ordering(654) 00:15:27.309 fused_ordering(655) 00:15:27.309 fused_ordering(656) 00:15:27.309 fused_ordering(657) 00:15:27.309 fused_ordering(658) 00:15:27.309 fused_ordering(659) 00:15:27.309 fused_ordering(660) 00:15:27.309 fused_ordering(661) 00:15:27.309 fused_ordering(662) 00:15:27.309 fused_ordering(663) 00:15:27.309 fused_ordering(664) 00:15:27.309 fused_ordering(665) 00:15:27.309 fused_ordering(666) 00:15:27.309 fused_ordering(667) 00:15:27.309 fused_ordering(668) 00:15:27.309 fused_ordering(669) 00:15:27.309 fused_ordering(670) 00:15:27.309 fused_ordering(671) 00:15:27.309 fused_ordering(672) 00:15:27.309 fused_ordering(673) 00:15:27.309 fused_ordering(674) 00:15:27.309 fused_ordering(675) 00:15:27.309 fused_ordering(676) 00:15:27.309 fused_ordering(677) 00:15:27.309 fused_ordering(678) 00:15:27.309 fused_ordering(679) 00:15:27.309 fused_ordering(680) 00:15:27.309 fused_ordering(681) 00:15:27.309 fused_ordering(682) 00:15:27.309 fused_ordering(683) 00:15:27.309 fused_ordering(684) 00:15:27.309 fused_ordering(685) 00:15:27.309 fused_ordering(686) 00:15:27.309 fused_ordering(687) 00:15:27.309 fused_ordering(688) 00:15:27.309 fused_ordering(689) 00:15:27.309 fused_ordering(690) 00:15:27.309 fused_ordering(691) 00:15:27.309 fused_ordering(692) 00:15:27.309 fused_ordering(693) 00:15:27.309 fused_ordering(694) 00:15:27.309 fused_ordering(695) 00:15:27.309 fused_ordering(696) 00:15:27.309 fused_ordering(697) 00:15:27.309 fused_ordering(698) 00:15:27.309 fused_ordering(699) 00:15:27.309 fused_ordering(700) 00:15:27.309 fused_ordering(701) 00:15:27.309 fused_ordering(702) 00:15:27.309 fused_ordering(703) 00:15:27.309 fused_ordering(704) 00:15:27.309 fused_ordering(705) 00:15:27.309 fused_ordering(706) 00:15:27.309 fused_ordering(707) 00:15:27.309 fused_ordering(708) 00:15:27.309 fused_ordering(709) 00:15:27.309 fused_ordering(710) 00:15:27.309 fused_ordering(711) 00:15:27.309 fused_ordering(712) 00:15:27.309 fused_ordering(713) 00:15:27.309 fused_ordering(714) 00:15:27.309 fused_ordering(715) 00:15:27.309 fused_ordering(716) 00:15:27.309 fused_ordering(717) 00:15:27.309 fused_ordering(718) 00:15:27.309 fused_ordering(719) 00:15:27.309 fused_ordering(720) 00:15:27.309 fused_ordering(721) 00:15:27.309 fused_ordering(722) 00:15:27.309 fused_ordering(723) 00:15:27.309 fused_ordering(724) 00:15:27.309 fused_ordering(725) 00:15:27.309 fused_ordering(726) 00:15:27.309 fused_ordering(727) 00:15:27.309 fused_ordering(728) 00:15:27.309 fused_ordering(729) 00:15:27.309 fused_ordering(730) 00:15:27.309 fused_ordering(731) 00:15:27.309 fused_ordering(732) 00:15:27.309 fused_ordering(733) 00:15:27.309 fused_ordering(734) 00:15:27.309 fused_ordering(735) 00:15:27.309 fused_ordering(736) 00:15:27.309 fused_ordering(737) 00:15:27.309 fused_ordering(738) 00:15:27.309 fused_ordering(739) 00:15:27.309 fused_ordering(740) 00:15:27.309 fused_ordering(741) 00:15:27.309 
fused_ordering(742) 00:15:27.309 fused_ordering(743) 00:15:27.309 fused_ordering(744) 00:15:27.309 fused_ordering(745) 00:15:27.309 fused_ordering(746) 00:15:27.309 fused_ordering(747) 00:15:27.309 fused_ordering(748) 00:15:27.309 fused_ordering(749) 00:15:27.309 fused_ordering(750) 00:15:27.309 fused_ordering(751) 00:15:27.309 fused_ordering(752) 00:15:27.309 fused_ordering(753) 00:15:27.309 fused_ordering(754) 00:15:27.309 fused_ordering(755) 00:15:27.309 fused_ordering(756) 00:15:27.309 fused_ordering(757) 00:15:27.309 fused_ordering(758) 00:15:27.309 fused_ordering(759) 00:15:27.309 fused_ordering(760) 00:15:27.309 fused_ordering(761) 00:15:27.309 fused_ordering(762) 00:15:27.309 fused_ordering(763) 00:15:27.309 fused_ordering(764) 00:15:27.309 fused_ordering(765) 00:15:27.309 fused_ordering(766) 00:15:27.309 fused_ordering(767) 00:15:27.309 fused_ordering(768) 00:15:27.309 fused_ordering(769) 00:15:27.309 fused_ordering(770) 00:15:27.309 fused_ordering(771) 00:15:27.309 fused_ordering(772) 00:15:27.309 fused_ordering(773) 00:15:27.309 fused_ordering(774) 00:15:27.309 fused_ordering(775) 00:15:27.309 fused_ordering(776) 00:15:27.309 fused_ordering(777) 00:15:27.309 fused_ordering(778) 00:15:27.309 fused_ordering(779) 00:15:27.309 fused_ordering(780) 00:15:27.309 fused_ordering(781) 00:15:27.309 fused_ordering(782) 00:15:27.309 fused_ordering(783) 00:15:27.309 fused_ordering(784) 00:15:27.309 fused_ordering(785) 00:15:27.309 fused_ordering(786) 00:15:27.309 fused_ordering(787) 00:15:27.309 fused_ordering(788) 00:15:27.309 fused_ordering(789) 00:15:27.309 fused_ordering(790) 00:15:27.309 fused_ordering(791) 00:15:27.309 fused_ordering(792) 00:15:27.309 fused_ordering(793) 00:15:27.309 fused_ordering(794) 00:15:27.309 fused_ordering(795) 00:15:27.309 fused_ordering(796) 00:15:27.309 fused_ordering(797) 00:15:27.309 fused_ordering(798) 00:15:27.309 fused_ordering(799) 00:15:27.309 fused_ordering(800) 00:15:27.309 fused_ordering(801) 00:15:27.309 fused_ordering(802) 00:15:27.309 fused_ordering(803) 00:15:27.309 fused_ordering(804) 00:15:27.309 fused_ordering(805) 00:15:27.309 fused_ordering(806) 00:15:27.309 fused_ordering(807) 00:15:27.309 fused_ordering(808) 00:15:27.309 fused_ordering(809) 00:15:27.309 fused_ordering(810) 00:15:27.309 fused_ordering(811) 00:15:27.309 fused_ordering(812) 00:15:27.309 fused_ordering(813) 00:15:27.309 fused_ordering(814) 00:15:27.309 fused_ordering(815) 00:15:27.309 fused_ordering(816) 00:15:27.309 fused_ordering(817) 00:15:27.309 fused_ordering(818) 00:15:27.309 fused_ordering(819) 00:15:27.309 fused_ordering(820) 00:15:27.881 fused_ordering(821) 00:15:27.881 fused_ordering(822) 00:15:27.881 fused_ordering(823) 00:15:27.881 fused_ordering(824) 00:15:27.881 fused_ordering(825) 00:15:27.881 fused_ordering(826) 00:15:27.881 fused_ordering(827) 00:15:27.881 fused_ordering(828) 00:15:27.881 fused_ordering(829) 00:15:27.881 fused_ordering(830) 00:15:27.881 fused_ordering(831) 00:15:27.881 fused_ordering(832) 00:15:27.881 fused_ordering(833) 00:15:27.881 fused_ordering(834) 00:15:27.881 fused_ordering(835) 00:15:27.881 fused_ordering(836) 00:15:27.881 fused_ordering(837) 00:15:27.881 fused_ordering(838) 00:15:27.881 fused_ordering(839) 00:15:27.881 fused_ordering(840) 00:15:27.881 fused_ordering(841) 00:15:27.881 fused_ordering(842) 00:15:27.881 fused_ordering(843) 00:15:27.881 fused_ordering(844) 00:15:27.881 fused_ordering(845) 00:15:27.882 fused_ordering(846) 00:15:27.882 fused_ordering(847) 00:15:27.882 fused_ordering(848) 00:15:27.882 fused_ordering(849) 
00:15:27.882 fused_ordering(850) 00:15:27.882 fused_ordering(851) 00:15:27.882 fused_ordering(852) 00:15:27.882 fused_ordering(853) 00:15:27.882 fused_ordering(854) 00:15:27.882 fused_ordering(855) 00:15:27.882 fused_ordering(856) 00:15:27.882 fused_ordering(857) 00:15:27.882 fused_ordering(858) 00:15:27.882 fused_ordering(859) 00:15:27.882 fused_ordering(860) 00:15:27.882 fused_ordering(861) 00:15:27.882 fused_ordering(862) 00:15:27.882 fused_ordering(863) 00:15:27.882 fused_ordering(864) 00:15:27.882 fused_ordering(865) 00:15:27.882 fused_ordering(866) 00:15:27.882 fused_ordering(867) 00:15:27.882 fused_ordering(868) 00:15:27.882 fused_ordering(869) 00:15:27.882 fused_ordering(870) 00:15:27.882 fused_ordering(871) 00:15:27.882 fused_ordering(872) 00:15:27.882 fused_ordering(873) 00:15:27.882 fused_ordering(874) 00:15:27.882 fused_ordering(875) 00:15:27.882 fused_ordering(876) 00:15:27.882 fused_ordering(877) 00:15:27.882 fused_ordering(878) 00:15:27.882 fused_ordering(879) 00:15:27.882 fused_ordering(880) 00:15:27.882 fused_ordering(881) 00:15:27.882 fused_ordering(882) 00:15:27.882 fused_ordering(883) 00:15:27.882 fused_ordering(884) 00:15:27.882 fused_ordering(885) 00:15:27.882 fused_ordering(886) 00:15:27.882 fused_ordering(887) 00:15:27.882 fused_ordering(888) 00:15:27.882 fused_ordering(889) 00:15:27.882 fused_ordering(890) 00:15:27.882 fused_ordering(891) 00:15:27.882 fused_ordering(892) 00:15:27.882 fused_ordering(893) 00:15:27.882 fused_ordering(894) 00:15:27.882 fused_ordering(895) 00:15:27.882 fused_ordering(896) 00:15:27.882 fused_ordering(897) 00:15:27.882 fused_ordering(898) 00:15:27.882 fused_ordering(899) 00:15:27.882 fused_ordering(900) 00:15:27.882 fused_ordering(901) 00:15:27.882 fused_ordering(902) 00:15:27.882 fused_ordering(903) 00:15:27.882 fused_ordering(904) 00:15:27.882 fused_ordering(905) 00:15:27.882 fused_ordering(906) 00:15:27.882 fused_ordering(907) 00:15:27.882 fused_ordering(908) 00:15:27.882 fused_ordering(909) 00:15:27.882 fused_ordering(910) 00:15:27.882 fused_ordering(911) 00:15:27.882 fused_ordering(912) 00:15:27.882 fused_ordering(913) 00:15:27.882 fused_ordering(914) 00:15:27.882 fused_ordering(915) 00:15:27.882 fused_ordering(916) 00:15:27.882 fused_ordering(917) 00:15:27.882 fused_ordering(918) 00:15:27.882 fused_ordering(919) 00:15:27.882 fused_ordering(920) 00:15:27.882 fused_ordering(921) 00:15:27.882 fused_ordering(922) 00:15:27.882 fused_ordering(923) 00:15:27.882 fused_ordering(924) 00:15:27.882 fused_ordering(925) 00:15:27.882 fused_ordering(926) 00:15:27.882 fused_ordering(927) 00:15:27.882 fused_ordering(928) 00:15:27.882 fused_ordering(929) 00:15:27.882 fused_ordering(930) 00:15:27.882 fused_ordering(931) 00:15:27.882 fused_ordering(932) 00:15:27.882 fused_ordering(933) 00:15:27.882 fused_ordering(934) 00:15:27.882 fused_ordering(935) 00:15:27.882 fused_ordering(936) 00:15:27.882 fused_ordering(937) 00:15:27.882 fused_ordering(938) 00:15:27.882 fused_ordering(939) 00:15:27.882 fused_ordering(940) 00:15:27.882 fused_ordering(941) 00:15:27.882 fused_ordering(942) 00:15:27.882 fused_ordering(943) 00:15:27.882 fused_ordering(944) 00:15:27.882 fused_ordering(945) 00:15:27.882 fused_ordering(946) 00:15:27.882 fused_ordering(947) 00:15:27.882 fused_ordering(948) 00:15:27.882 fused_ordering(949) 00:15:27.882 fused_ordering(950) 00:15:27.882 fused_ordering(951) 00:15:27.882 fused_ordering(952) 00:15:27.882 fused_ordering(953) 00:15:27.882 fused_ordering(954) 00:15:27.882 fused_ordering(955) 00:15:27.882 fused_ordering(956) 00:15:27.882 
fused_ordering(957) 00:15:27.882 fused_ordering(958) 00:15:27.882 fused_ordering(959) 00:15:27.882 fused_ordering(960) 00:15:27.882 fused_ordering(961) 00:15:27.882 fused_ordering(962) 00:15:27.882 fused_ordering(963) 00:15:27.882 fused_ordering(964) 00:15:27.882 fused_ordering(965) 00:15:27.882 fused_ordering(966) 00:15:27.882 fused_ordering(967) 00:15:27.882 fused_ordering(968) 00:15:27.882 fused_ordering(969) 00:15:27.882 fused_ordering(970) 00:15:27.882 fused_ordering(971) 00:15:27.882 fused_ordering(972) 00:15:27.882 fused_ordering(973) 00:15:27.882 fused_ordering(974) 00:15:27.882 fused_ordering(975) 00:15:27.882 fused_ordering(976) 00:15:27.882 fused_ordering(977) 00:15:27.882 fused_ordering(978) 00:15:27.882 fused_ordering(979) 00:15:27.882 fused_ordering(980) 00:15:27.882 fused_ordering(981) 00:15:27.882 fused_ordering(982) 00:15:27.882 fused_ordering(983) 00:15:27.882 fused_ordering(984) 00:15:27.882 fused_ordering(985) 00:15:27.882 fused_ordering(986) 00:15:27.882 fused_ordering(987) 00:15:27.882 fused_ordering(988) 00:15:27.882 fused_ordering(989) 00:15:27.882 fused_ordering(990) 00:15:27.882 fused_ordering(991) 00:15:27.882 fused_ordering(992) 00:15:27.882 fused_ordering(993) 00:15:27.882 fused_ordering(994) 00:15:27.882 fused_ordering(995) 00:15:27.882 fused_ordering(996) 00:15:27.882 fused_ordering(997) 00:15:27.882 fused_ordering(998) 00:15:27.882 fused_ordering(999) 00:15:27.882 fused_ordering(1000) 00:15:27.882 fused_ordering(1001) 00:15:27.882 fused_ordering(1002) 00:15:27.882 fused_ordering(1003) 00:15:27.882 fused_ordering(1004) 00:15:27.882 fused_ordering(1005) 00:15:27.882 fused_ordering(1006) 00:15:27.882 fused_ordering(1007) 00:15:27.882 fused_ordering(1008) 00:15:27.882 fused_ordering(1009) 00:15:27.882 fused_ordering(1010) 00:15:27.882 fused_ordering(1011) 00:15:27.882 fused_ordering(1012) 00:15:27.882 fused_ordering(1013) 00:15:27.882 fused_ordering(1014) 00:15:27.882 fused_ordering(1015) 00:15:27.882 fused_ordering(1016) 00:15:27.882 fused_ordering(1017) 00:15:27.882 fused_ordering(1018) 00:15:27.882 fused_ordering(1019) 00:15:27.882 fused_ordering(1020) 00:15:27.882 fused_ordering(1021) 00:15:27.882 fused_ordering(1022) 00:15:27.882 fused_ordering(1023) 00:15:27.882 22:42:12 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:27.882 22:42:12 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:27.882 22:42:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:27.882 22:42:12 -- nvmf/common.sh@116 -- # sync 00:15:27.882 22:42:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:27.882 22:42:12 -- nvmf/common.sh@119 -- # set +e 00:15:27.882 22:42:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:27.882 22:42:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:27.882 rmmod nvme_tcp 00:15:27.882 rmmod nvme_fabrics 00:15:27.882 rmmod nvme_keyring 00:15:27.882 22:42:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:27.882 22:42:12 -- nvmf/common.sh@123 -- # set -e 00:15:27.882 22:42:12 -- nvmf/common.sh@124 -- # return 0 00:15:27.882 22:42:12 -- nvmf/common.sh@477 -- # '[' -n 1048824 ']' 00:15:27.882 22:42:12 -- nvmf/common.sh@478 -- # killprocess 1048824 00:15:27.882 22:42:12 -- common/autotest_common.sh@926 -- # '[' -z 1048824 ']' 00:15:27.882 22:42:12 -- common/autotest_common.sh@930 -- # kill -0 1048824 00:15:27.882 22:42:12 -- common/autotest_common.sh@931 -- # uname 00:15:27.882 22:42:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.882 22:42:12 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 1048824 00:15:27.882 22:42:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:27.882 22:42:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:27.882 22:42:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1048824' 00:15:27.882 killing process with pid 1048824 00:15:27.882 22:42:12 -- common/autotest_common.sh@945 -- # kill 1048824 00:15:27.882 22:42:12 -- common/autotest_common.sh@950 -- # wait 1048824 00:15:28.144 22:42:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:28.144 22:42:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:28.144 22:42:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:28.144 22:42:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.144 22:42:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:28.144 22:42:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.144 22:42:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.144 22:42:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.059 22:42:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:30.059 00:15:30.059 real 0m14.011s 00:15:30.059 user 0m7.370s 00:15:30.059 sys 0m7.509s 00:15:30.059 22:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.059 22:42:14 -- common/autotest_common.sh@10 -- # set +x 00:15:30.059 ************************************ 00:15:30.059 END TEST nvmf_fused_ordering 00:15:30.059 ************************************ 00:15:30.321 22:42:14 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:30.321 22:42:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:30.321 22:42:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:30.321 22:42:14 -- common/autotest_common.sh@10 -- # set +x 00:15:30.321 ************************************ 00:15:30.321 START TEST nvmf_delete_subsystem 00:15:30.321 ************************************ 00:15:30.321 22:42:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:30.321 * Looking for test storage... 
00:15:30.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.321 22:42:14 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.321 22:42:14 -- nvmf/common.sh@7 -- # uname -s 00:15:30.321 22:42:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.321 22:42:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.321 22:42:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.321 22:42:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.321 22:42:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.321 22:42:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.321 22:42:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.321 22:42:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.321 22:42:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.321 22:42:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.321 22:42:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:30.321 22:42:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:30.321 22:42:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.321 22:42:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.321 22:42:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.321 22:42:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.321 22:42:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.321 22:42:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.321 22:42:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.321 22:42:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.321 22:42:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.321 22:42:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.321 22:42:15 -- paths/export.sh@5 -- # export PATH 00:15:30.321 22:42:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.321 22:42:15 -- nvmf/common.sh@46 -- # : 0 00:15:30.321 22:42:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:30.321 22:42:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:30.321 22:42:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:30.321 22:42:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.321 22:42:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.321 22:42:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:30.321 22:42:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:30.321 22:42:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:30.321 22:42:15 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:30.321 22:42:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:30.321 22:42:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.321 22:42:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:30.321 22:42:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:30.321 22:42:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:30.321 22:42:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.321 22:42:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.321 22:42:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.321 22:42:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:30.321 22:42:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:30.321 22:42:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:30.321 22:42:15 -- common/autotest_common.sh@10 -- # set +x 00:15:38.469 22:42:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:38.469 22:42:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:38.469 22:42:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:38.469 22:42:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:38.469 22:42:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:38.469 22:42:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:38.469 22:42:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:38.469 22:42:22 -- nvmf/common.sh@294 -- # net_devs=() 00:15:38.469 22:42:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:38.469 22:42:22 -- nvmf/common.sh@295 -- # e810=() 00:15:38.469 22:42:22 -- nvmf/common.sh@295 -- # local -ga e810 00:15:38.469 22:42:22 -- nvmf/common.sh@296 -- # x722=() 
00:15:38.469 22:42:22 -- nvmf/common.sh@296 -- # local -ga x722 00:15:38.469 22:42:22 -- nvmf/common.sh@297 -- # mlx=() 00:15:38.469 22:42:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:38.469 22:42:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.469 22:42:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:38.469 22:42:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:38.469 22:42:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:38.469 22:42:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:38.469 22:42:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:38.469 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:38.469 22:42:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:38.469 22:42:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:38.469 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:38.469 22:42:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:38.469 22:42:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:38.469 22:42:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.469 22:42:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:38.469 22:42:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.469 22:42:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:38.469 Found net devices under 0000:31:00.0: cvl_0_0 00:15:38.469 22:42:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:38.469 22:42:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:38.469 22:42:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.469 22:42:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:38.469 22:42:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.469 22:42:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:38.469 Found net devices under 0000:31:00.1: cvl_0_1 00:15:38.469 22:42:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.469 22:42:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:38.469 22:42:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:38.469 22:42:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:38.469 22:42:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:38.469 22:42:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.469 22:42:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.469 22:42:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.469 22:42:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:38.469 22:42:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.469 22:42:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.469 22:42:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:38.469 22:42:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.469 22:42:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.469 22:42:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:38.469 22:42:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:38.469 22:42:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.469 22:42:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.469 22:42:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.469 22:42:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.469 22:42:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:38.469 22:42:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.469 22:42:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.469 22:42:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.469 22:42:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:38.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.748 ms 00:15:38.469 00:15:38.469 --- 10.0.0.2 ping statistics --- 00:15:38.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.469 rtt min/avg/max/mdev = 0.748/0.748/0.748/0.000 ms 00:15:38.469 22:42:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:15:38.469 00:15:38.469 --- 10.0.0.1 ping statistics --- 00:15:38.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.469 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:15:38.469 22:42:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.469 22:42:23 -- nvmf/common.sh@410 -- # return 0 00:15:38.469 22:42:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:38.469 22:42:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.469 22:42:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:38.469 22:42:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:38.469 22:42:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.469 22:42:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:38.469 22:42:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:38.469 22:42:23 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:38.469 22:42:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:38.469 22:42:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:38.469 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:38.469 22:42:23 -- nvmf/common.sh@469 -- # nvmfpid=1054231 00:15:38.469 22:42:23 -- nvmf/common.sh@470 -- # waitforlisten 1054231 00:15:38.469 22:42:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:38.469 22:42:23 -- common/autotest_common.sh@819 -- # '[' -z 1054231 ']' 00:15:38.469 22:42:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.469 22:42:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:38.469 22:42:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.469 22:42:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:38.469 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:38.469 [2024-04-15 22:42:23.169201] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:38.469 [2024-04-15 22:42:23.169265] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.469 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.469 [2024-04-15 22:42:23.242241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:38.730 [2024-04-15 22:42:23.305000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:38.730 [2024-04-15 22:42:23.305123] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.730 [2024-04-15 22:42:23.305131] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.730 [2024-04-15 22:42:23.305141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
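The network plumbing traced above amounts to moving the target-side port into a private network namespace while the initiator port stays in the root namespace, then verifying reachability before launching the target. A condensed sketch; interface names (cvl_0_0, cvl_0_1), addresses, and flags are taken from the run, and ./build/bin/nvmf_tgt abbreviates the full workspace path shown above:

    # Target port lives in a namespace, initiator port stays on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow TCP/4420 in on cvl_0_1
    ping -c 1 10.0.0.2                                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host
    modprobe nvme-tcp                                                    # initiator-side kernel driver
    # The target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3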
00:15:38.730 [2024-04-15 22:42:23.305285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.730 [2024-04-15 22:42:23.305289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.302 22:42:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:39.302 22:42:23 -- common/autotest_common.sh@852 -- # return 0 00:15:39.302 22:42:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:39.302 22:42:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:39.302 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:39.302 22:42:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.302 22:42:23 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.302 22:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.302 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:39.302 [2024-04-15 22:42:23.956701] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.302 22:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.302 22:42:23 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:39.302 22:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.302 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:39.302 22:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.302 22:42:23 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.302 22:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.302 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:39.302 [2024-04-15 22:42:23.980877] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.302 22:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.302 22:42:23 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:39.302 22:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.302 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:39.302 NULL1 00:15:39.302 22:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.302 22:42:23 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:39.302 22:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.302 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:39.302 Delay0 00:15:39.302 22:42:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.302 22:42:24 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.302 22:42:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.302 22:42:24 -- common/autotest_common.sh@10 -- # set +x 00:15:39.302 22:42:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.302 22:42:24 -- target/delete_subsystem.sh@28 -- # perf_pid=1054262 00:15:39.302 22:42:24 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:39.302 22:42:24 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:39.302 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.302 [2024-04-15 22:42:24.077548] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:41.848 22:42:26 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.848 22:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.848 22:42:26 -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 [2024-04-15 22:42:26.121094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a8040 is same with the state(5) to be set 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 
00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error 
(sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Write completed with error (sct=0, sc=8) 00:15:41.848 starting I/O failed: -6 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.848 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 starting I/O failed: -6 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 [2024-04-15 22:42:26.127256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff038000c00 is same with the state(5) to be set 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 
00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Write completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:41.849 Read completed with error (sct=0, sc=8) 00:15:42.421 [2024-04-15 22:42:27.095085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a95e0 is same with the state(5) to be set 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 [2024-04-15 22:42:27.124669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199e910 is same with the state(5) to be set 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 [2024-04-15 22:42:27.124876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19888b0 is same with the state(5) to be set 00:15:42.421 Write completed with error (sct=0, sc=8) 
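[editor's note] The long runs of "completed with error (sct=0, sc=8)" around this point are the intended result of deleting the subsystem mid-I/O: the target tears down its queue pairs, in-flight commands complete with a generic status (sct=0; sc=8 is listed in the NVMe base specification as command aborted due to SQ deletion), and spdk_nvme_perf reports "starting I/O failed: -6" for submissions it can no longer queue. If you want a rough count of how many commands were aborted in a saved copy of this console output, something like the following works (the log filename here is hypothetical):
  grep -o 'completed with error (sct=0, sc=8)' nvmf-tcp-phy-autotest-console.log | wc -l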
00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 [2024-04-15 22:42:27.128988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff03800c600 is same with the state(5) to be set 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Write completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 Read completed with error (sct=0, sc=8) 00:15:42.421 [2024-04-15 22:42:27.129064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff03800bf20 is same with the state(5) to be set 00:15:42.421 [2024-04-15 22:42:27.129532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a95e0 (9): Bad file descriptor 00:15:42.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:42.421 Initializing NVMe Controllers 00:15:42.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:42.421 Controller IO queue size 128, less than required. 00:15:42.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:42.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:42.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:42.421 Initialization complete. Launching workers. 
00:15:42.421 ======================================================== 00:15:42.421 Latency(us) 00:15:42.421 Device Information : IOPS MiB/s Average min max 00:15:42.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.91 0.08 911255.70 251.30 1006562.48 00:15:42.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.93 0.08 922945.48 269.60 1010935.75 00:15:42.421 ======================================================== 00:15:42.421 Total : 320.83 0.16 917009.83 251.30 1010935.75 00:15:42.421 00:15:42.421 22:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.421 22:42:27 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:42.421 22:42:27 -- target/delete_subsystem.sh@35 -- # kill -0 1054262 00:15:42.421 22:42:27 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@35 -- # kill -0 1054262 00:15:42.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1054262) - No such process 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@45 -- # NOT wait 1054262 00:15:42.995 22:42:27 -- common/autotest_common.sh@640 -- # local es=0 00:15:42.995 22:42:27 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1054262 00:15:42.995 22:42:27 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:42.995 22:42:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:42.995 22:42:27 -- common/autotest_common.sh@632 -- # type -t wait 00:15:42.995 22:42:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:42.995 22:42:27 -- common/autotest_common.sh@643 -- # wait 1054262 00:15:42.995 22:42:27 -- common/autotest_common.sh@643 -- # es=1 00:15:42.995 22:42:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:42.995 22:42:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:42.995 22:42:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:42.995 22:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.995 22:42:27 -- common/autotest_common.sh@10 -- # set +x 00:15:42.995 22:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.995 22:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.995 22:42:27 -- common/autotest_common.sh@10 -- # set +x 00:15:42.995 [2024-04-15 22:42:27.662258] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.995 22:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.995 22:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.995 22:42:27 -- common/autotest_common.sh@10 -- # set +x 00:15:42.995 22:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@54 -- # perf_pid=1055023 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@57 -- # kill -0 1055023 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:42.995 22:42:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:42.995 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.995 [2024-04-15 22:42:27.728552] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:43.569 22:42:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:43.569 22:42:28 -- target/delete_subsystem.sh@57 -- # kill -0 1055023 00:15:43.569 22:42:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:44.143 22:42:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:44.143 22:42:28 -- target/delete_subsystem.sh@57 -- # kill -0 1055023 00:15:44.143 22:42:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:44.404 22:42:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:44.404 22:42:29 -- target/delete_subsystem.sh@57 -- # kill -0 1055023 00:15:44.404 22:42:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:44.976 22:42:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:44.976 22:42:29 -- target/delete_subsystem.sh@57 -- # kill -0 1055023 00:15:44.977 22:42:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:45.548 22:42:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:45.548 22:42:30 -- target/delete_subsystem.sh@57 -- # kill -0 1055023 00:15:45.548 22:42:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:46.120 22:42:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:46.120 22:42:30 -- target/delete_subsystem.sh@57 -- # kill -0 1055023 00:15:46.120 22:42:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:46.120 Initializing NVMe Controllers 00:15:46.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:46.120 Controller IO queue size 128, less than required. 00:15:46.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:46.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:46.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:46.120 Initialization complete. Launching workers. 
00:15:46.120 ======================================================== 00:15:46.120 Latency(us) 00:15:46.120 Device Information : IOPS MiB/s Average min max 00:15:46.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002103.14 1000211.99 1005509.77 00:15:46.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003615.56 1000233.94 1009866.02 00:15:46.120 ======================================================== 00:15:46.120 Total : 256.00 0.12 1002859.35 1000211.99 1009866.02 00:15:46.120 00:15:46.691 22:42:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:46.691 22:42:31 -- target/delete_subsystem.sh@57 -- # kill -0 1055023 00:15:46.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1055023) - No such process 00:15:46.691 22:42:31 -- target/delete_subsystem.sh@67 -- # wait 1055023 00:15:46.691 22:42:31 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:46.691 22:42:31 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:46.691 22:42:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:46.691 22:42:31 -- nvmf/common.sh@116 -- # sync 00:15:46.691 22:42:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:46.691 22:42:31 -- nvmf/common.sh@119 -- # set +e 00:15:46.691 22:42:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:46.691 22:42:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:46.691 rmmod nvme_tcp 00:15:46.691 rmmod nvme_fabrics 00:15:46.691 rmmod nvme_keyring 00:15:46.691 22:42:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:46.691 22:42:31 -- nvmf/common.sh@123 -- # set -e 00:15:46.691 22:42:31 -- nvmf/common.sh@124 -- # return 0 00:15:46.691 22:42:31 -- nvmf/common.sh@477 -- # '[' -n 1054231 ']' 00:15:46.691 22:42:31 -- nvmf/common.sh@478 -- # killprocess 1054231 00:15:46.691 22:42:31 -- common/autotest_common.sh@926 -- # '[' -z 1054231 ']' 00:15:46.691 22:42:31 -- common/autotest_common.sh@930 -- # kill -0 1054231 00:15:46.691 22:42:31 -- common/autotest_common.sh@931 -- # uname 00:15:46.691 22:42:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:46.691 22:42:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1054231 00:15:46.691 22:42:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:46.691 22:42:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:46.691 22:42:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1054231' 00:15:46.691 killing process with pid 1054231 00:15:46.691 22:42:31 -- common/autotest_common.sh@945 -- # kill 1054231 00:15:46.691 22:42:31 -- common/autotest_common.sh@950 -- # wait 1054231 00:15:46.691 22:42:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:46.691 22:42:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:46.691 22:42:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:46.691 22:42:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.691 22:42:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:46.691 22:42:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.691 22:42:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.691 22:42:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.237 22:42:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:49.237 00:15:49.237 real 0m18.654s 00:15:49.237 user 0m30.644s 00:15:49.237 sys 0m6.833s 
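[editor's note] The repeated "kill -0 <pid>" / "sleep 0.5" lines above are delete_subsystem.sh polling the perf process with a bounded delay counter until it exits; the "No such process" messages for pids 1054262 and 1055023 are that loop observing the expected exit. A minimal sketch of the same wait pattern (perf_pid is assumed to be set by the caller; the 20-iteration bound mirrors the `(( delay++ > 20 ))` guard seen in the trace):
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1   # give up after ~10s of 0.5s sleeps
      sleep 0.5
  done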
00:15:49.237 22:42:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.237 22:42:33 -- common/autotest_common.sh@10 -- # set +x 00:15:49.237 ************************************ 00:15:49.237 END TEST nvmf_delete_subsystem 00:15:49.237 ************************************ 00:15:49.237 22:42:33 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:49.237 22:42:33 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:49.237 22:42:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:49.237 22:42:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:49.237 22:42:33 -- common/autotest_common.sh@10 -- # set +x 00:15:49.237 ************************************ 00:15:49.237 START TEST nvmf_nvme_cli 00:15:49.237 ************************************ 00:15:49.237 22:42:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:49.237 * Looking for test storage... 00:15:49.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.237 22:42:33 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.237 22:42:33 -- nvmf/common.sh@7 -- # uname -s 00:15:49.237 22:42:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.237 22:42:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.237 22:42:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.237 22:42:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.237 22:42:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.237 22:42:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.237 22:42:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.237 22:42:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.237 22:42:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.237 22:42:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.237 22:42:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.237 22:42:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.237 22:42:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.237 22:42:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.237 22:42:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.237 22:42:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.237 22:42:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.237 22:42:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.237 22:42:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.237 22:42:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.237 22:42:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.237 22:42:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.237 22:42:33 -- paths/export.sh@5 -- # export PATH 00:15:49.237 22:42:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.237 22:42:33 -- nvmf/common.sh@46 -- # : 0 00:15:49.237 22:42:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:49.237 22:42:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:49.237 22:42:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:49.237 22:42:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.237 22:42:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.237 22:42:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:49.237 22:42:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:49.237 22:42:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:49.237 22:42:33 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.237 22:42:33 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.237 22:42:33 -- target/nvme_cli.sh@14 -- # devs=() 00:15:49.237 22:42:33 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:49.237 22:42:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:49.237 22:42:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.237 22:42:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:49.237 22:42:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:49.237 22:42:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:49.237 22:42:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.237 22:42:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.237 22:42:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.237 22:42:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:49.238 22:42:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:49.238 22:42:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:49.238 22:42:33 -- common/autotest_common.sh@10 -- # set +x 00:15:57.381 22:42:41 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:57.381 22:42:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:57.381 22:42:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:57.381 22:42:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:57.381 22:42:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:57.381 22:42:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:57.381 22:42:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:57.381 22:42:41 -- nvmf/common.sh@294 -- # net_devs=() 00:15:57.381 22:42:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:57.381 22:42:41 -- nvmf/common.sh@295 -- # e810=() 00:15:57.381 22:42:41 -- nvmf/common.sh@295 -- # local -ga e810 00:15:57.381 22:42:41 -- nvmf/common.sh@296 -- # x722=() 00:15:57.381 22:42:41 -- nvmf/common.sh@296 -- # local -ga x722 00:15:57.381 22:42:41 -- nvmf/common.sh@297 -- # mlx=() 00:15:57.381 22:42:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:57.381 22:42:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.381 22:42:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:57.381 22:42:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:57.381 22:42:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:57.381 22:42:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:57.381 22:42:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:57.381 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:57.381 22:42:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:57.381 22:42:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:57.381 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:57.381 22:42:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
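[editor's note] The PCI scan above is how nvmf/common.sh selects interfaces for SPDK_TEST_NVMF_NICS=e810: Intel device IDs 0x1592/0x159b go into the e810 array, 0x37d2 into x722, and the Mellanox IDs into mlx; in this run both ports of a dual-port E810 (0000:31:00.0 and .1, driver ice) are matched and their net devices become the TCP test interfaces. A hypothetical spot check outside the harness, using standard tools rather than the script's own sysfs globbing:
  lspci -d 8086:159b                           # list Intel E810 ports by vendor:device ID
  ls /sys/bus/pci/devices/0000:31:00.0/net     # net device(s) bound to the first port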
00:15:57.381 22:42:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:57.381 22:42:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:57.381 22:42:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.381 22:42:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:57.381 22:42:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.381 22:42:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:57.381 Found net devices under 0000:31:00.0: cvl_0_0 00:15:57.381 22:42:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.381 22:42:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:57.381 22:42:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.381 22:42:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:57.381 22:42:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.381 22:42:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:57.381 Found net devices under 0000:31:00.1: cvl_0_1 00:15:57.381 22:42:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.381 22:42:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:57.381 22:42:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:57.381 22:42:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:57.381 22:42:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:57.381 22:42:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.381 22:42:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.381 22:42:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.381 22:42:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:57.381 22:42:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.381 22:42:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.381 22:42:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:57.381 22:42:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.381 22:42:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.381 22:42:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:57.381 22:42:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:57.381 22:42:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.381 22:42:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.381 22:42:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.381 22:42:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.381 22:42:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:57.381 22:42:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.381 22:42:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.381 22:42:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.381 22:42:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:57.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:57.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:15:57.381 00:15:57.381 --- 10.0.0.2 ping statistics --- 00:15:57.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.381 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:15:57.381 22:42:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:15:57.381 00:15:57.381 --- 10.0.0.1 ping statistics --- 00:15:57.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.382 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:15:57.382 22:42:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.382 22:42:42 -- nvmf/common.sh@410 -- # return 0 00:15:57.382 22:42:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:57.382 22:42:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.382 22:42:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:57.382 22:42:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:57.382 22:42:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.382 22:42:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:57.382 22:42:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:57.382 22:42:42 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:57.382 22:42:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:57.382 22:42:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:57.382 22:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 22:42:42 -- nvmf/common.sh@469 -- # nvmfpid=1060650 00:15:57.382 22:42:42 -- nvmf/common.sh@470 -- # waitforlisten 1060650 00:15:57.382 22:42:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.382 22:42:42 -- common/autotest_common.sh@819 -- # '[' -z 1060650 ']' 00:15:57.382 22:42:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.382 22:42:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:57.382 22:42:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.382 22:42:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:57.382 22:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 [2024-04-15 22:42:42.138751] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:57.382 [2024-04-15 22:42:42.138801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.382 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.641 [2024-04-15 22:42:42.211902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.641 [2024-04-15 22:42:42.275678] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:57.641 [2024-04-15 22:42:42.275816] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.642 [2024-04-15 22:42:42.275826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
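[editor's note] nvmf_tcp_init above wires the two E810 ports back-to-back through a network namespace so target and initiator can run over real hardware on one machine: cvl_0_0 is moved into cvl_0_0_ns_spdk as 10.0.0.2 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), TCP port 4420 is allowed in iptables, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. A condensed sketch of the same steps, using the interface names from this run (assumes root):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1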
00:15:57.642 [2024-04-15 22:42:42.275834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.642 [2024-04-15 22:42:42.275963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.642 [2024-04-15 22:42:42.276079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.642 [2024-04-15 22:42:42.276236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.642 [2024-04-15 22:42:42.276237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.273 22:42:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:58.273 22:42:42 -- common/autotest_common.sh@852 -- # return 0 00:15:58.273 22:42:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:58.273 22:42:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:58.273 22:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 22:42:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.273 22:42:42 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.273 22:42:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.273 22:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 [2024-04-15 22:42:42.963747] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.273 22:42:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.273 22:42:42 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:58.273 22:42:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.273 22:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 Malloc0 00:15:58.273 22:42:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.273 22:42:42 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:58.273 22:42:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.273 22:42:42 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 Malloc1 00:15:58.273 22:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.273 22:42:43 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:58.273 22:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.273 22:42:43 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 22:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.273 22:42:43 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.273 22:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.273 22:42:43 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 22:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.273 22:42:43 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.273 22:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.273 22:42:43 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 22:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.273 22:42:43 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.273 22:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.273 22:42:43 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 [2024-04-15 22:42:43.049805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:15:58.273 22:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.273 22:42:43 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:58.273 22:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.273 22:42:43 -- common/autotest_common.sh@10 -- # set +x 00:15:58.273 22:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.273 22:42:43 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:58.534 00:15:58.534 Discovery Log Number of Records 2, Generation counter 2 00:15:58.534 =====Discovery Log Entry 0====== 00:15:58.534 trtype: tcp 00:15:58.534 adrfam: ipv4 00:15:58.534 subtype: current discovery subsystem 00:15:58.534 treq: not required 00:15:58.534 portid: 0 00:15:58.534 trsvcid: 4420 00:15:58.534 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:58.534 traddr: 10.0.0.2 00:15:58.534 eflags: explicit discovery connections, duplicate discovery information 00:15:58.534 sectype: none 00:15:58.534 =====Discovery Log Entry 1====== 00:15:58.534 trtype: tcp 00:15:58.534 adrfam: ipv4 00:15:58.534 subtype: nvme subsystem 00:15:58.534 treq: not required 00:15:58.534 portid: 0 00:15:58.535 trsvcid: 4420 00:15:58.535 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:58.535 traddr: 10.0.0.2 00:15:58.535 eflags: none 00:15:58.535 sectype: none 00:15:58.535 22:42:43 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:58.535 22:42:43 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:58.535 22:42:43 -- nvmf/common.sh@510 -- # local dev _ 00:15:58.535 22:42:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:58.535 22:42:43 -- nvmf/common.sh@509 -- # nvme list 00:15:58.535 22:42:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:58.535 22:42:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:58.535 22:42:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:58.535 22:42:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:58.535 22:42:43 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:58.535 22:42:43 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.919 22:42:44 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:59.919 22:42:44 -- common/autotest_common.sh@1177 -- # local i=0 00:15:59.919 22:42:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.919 22:42:44 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:59.919 22:42:44 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:59.919 22:42:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:02.462 22:42:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:02.462 22:42:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:02.462 22:42:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.462 22:42:46 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:16:02.462 22:42:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.462 22:42:46 -- common/autotest_common.sh@1187 -- # return 0 00:16:02.462 22:42:46 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:02.462 22:42:46 -- 
nvmf/common.sh@510 -- # local dev _ 00:16:02.462 22:42:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:46 -- nvmf/common.sh@509 -- # nvme list 00:16:02.462 22:42:46 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:02.462 22:42:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:46 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:02.462 22:42:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:02.462 22:42:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:02.462 22:42:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:02.462 22:42:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:02.462 22:42:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:46 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:02.462 /dev/nvme0n1 ]] 00:16:02.462 22:42:46 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:02.462 22:42:46 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:02.462 22:42:46 -- nvmf/common.sh@510 -- # local dev _ 00:16:02.462 22:42:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:46 -- nvmf/common.sh@509 -- # nvme list 00:16:02.462 22:42:47 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:02.462 22:42:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:47 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:02.462 22:42:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:47 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:02.462 22:42:47 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:02.462 22:42:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:47 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:02.462 22:42:47 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:02.462 22:42:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:02.462 22:42:47 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:02.462 22:42:47 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.723 22:42:47 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.723 22:42:47 -- common/autotest_common.sh@1198 -- # local i=0 00:16:02.723 22:42:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:02.723 22:42:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.723 22:42:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:02.723 22:42:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.723 22:42:47 -- common/autotest_common.sh@1210 -- # return 0 00:16:02.723 22:42:47 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:02.723 22:42:47 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.723 22:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:02.723 22:42:47 -- common/autotest_common.sh@10 -- # set +x 00:16:02.723 22:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:02.723 22:42:47 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:02.723 22:42:47 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:02.723 22:42:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:02.723 22:42:47 -- nvmf/common.sh@116 -- # sync 00:16:02.723 22:42:47 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:02.723 22:42:47 -- nvmf/common.sh@119 -- # set +e 00:16:02.723 22:42:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:02.723 22:42:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:02.723 rmmod nvme_tcp 00:16:02.723 rmmod nvme_fabrics 00:16:02.723 rmmod nvme_keyring 00:16:02.723 22:42:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:02.723 22:42:47 -- nvmf/common.sh@123 -- # set -e 00:16:02.723 22:42:47 -- nvmf/common.sh@124 -- # return 0 00:16:02.723 22:42:47 -- nvmf/common.sh@477 -- # '[' -n 1060650 ']' 00:16:02.723 22:42:47 -- nvmf/common.sh@478 -- # killprocess 1060650 00:16:02.723 22:42:47 -- common/autotest_common.sh@926 -- # '[' -z 1060650 ']' 00:16:02.723 22:42:47 -- common/autotest_common.sh@930 -- # kill -0 1060650 00:16:02.723 22:42:47 -- common/autotest_common.sh@931 -- # uname 00:16:02.723 22:42:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.723 22:42:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1060650 00:16:02.723 22:42:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:02.723 22:42:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:02.723 22:42:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1060650' 00:16:02.723 killing process with pid 1060650 00:16:02.723 22:42:47 -- common/autotest_common.sh@945 -- # kill 1060650 00:16:02.723 22:42:47 -- common/autotest_common.sh@950 -- # wait 1060650 00:16:02.984 22:42:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:02.984 22:42:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:02.984 22:42:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:02.984 22:42:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.984 22:42:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:02.984 22:42:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.984 22:42:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.984 22:42:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.895 22:42:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:04.895 00:16:04.895 real 0m16.080s 00:16:04.895 user 0m23.686s 00:16:04.895 sys 0m6.708s 00:16:04.895 22:42:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.895 22:42:49 -- common/autotest_common.sh@10 -- # set +x 00:16:04.895 ************************************ 00:16:04.895 END TEST nvmf_nvme_cli 00:16:04.895 ************************************ 00:16:05.156 22:42:49 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:16:05.156 22:42:49 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:05.156 22:42:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:05.156 22:42:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.156 22:42:49 -- common/autotest_common.sh@10 -- # set +x 00:16:05.156 ************************************ 00:16:05.156 START TEST nvmf_host_management 00:16:05.156 ************************************ 00:16:05.156 22:42:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:05.156 * Looking for test storage... 
00:16:05.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.156 22:42:49 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.156 22:42:49 -- nvmf/common.sh@7 -- # uname -s 00:16:05.156 22:42:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.156 22:42:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.156 22:42:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.156 22:42:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.156 22:42:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.156 22:42:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.156 22:42:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.156 22:42:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.156 22:42:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.156 22:42:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.156 22:42:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:05.156 22:42:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:05.156 22:42:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.156 22:42:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.156 22:42:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.156 22:42:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.156 22:42:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.156 22:42:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.156 22:42:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.156 22:42:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.156 22:42:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.156 22:42:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.156 22:42:49 -- paths/export.sh@5 -- # export PATH 00:16:05.156 22:42:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.156 22:42:49 -- nvmf/common.sh@46 -- # : 0 00:16:05.156 22:42:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:05.156 22:42:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:05.156 22:42:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:05.156 22:42:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.156 22:42:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.156 22:42:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:05.156 22:42:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:05.157 22:42:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:05.157 22:42:49 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.157 22:42:49 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.157 22:42:49 -- target/host_management.sh@104 -- # nvmftestinit 00:16:05.157 22:42:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:05.157 22:42:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.157 22:42:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:05.157 22:42:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:05.157 22:42:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:05.157 22:42:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.157 22:42:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.157 22:42:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.157 22:42:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:05.157 22:42:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:05.157 22:42:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:05.157 22:42:49 -- common/autotest_common.sh@10 -- # set +x 00:16:13.299 22:42:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:13.299 22:42:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:13.299 22:42:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:13.299 22:42:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:13.299 22:42:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:13.299 22:42:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:13.299 22:42:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:13.299 22:42:57 -- nvmf/common.sh@294 -- # net_devs=() 00:16:13.299 22:42:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:13.299 
22:42:57 -- nvmf/common.sh@295 -- # e810=() 00:16:13.299 22:42:57 -- nvmf/common.sh@295 -- # local -ga e810 00:16:13.299 22:42:57 -- nvmf/common.sh@296 -- # x722=() 00:16:13.299 22:42:57 -- nvmf/common.sh@296 -- # local -ga x722 00:16:13.299 22:42:57 -- nvmf/common.sh@297 -- # mlx=() 00:16:13.299 22:42:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:13.299 22:42:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.299 22:42:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:13.299 22:42:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:13.299 22:42:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:13.299 22:42:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:13.299 22:42:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:13.299 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:13.299 22:42:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:13.299 22:42:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:13.299 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:13.299 22:42:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:13.299 22:42:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:13.299 22:42:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:13.299 22:42:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.300 22:42:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:13.300 22:42:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.300 22:42:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:16:13.300 Found net devices under 0000:31:00.0: cvl_0_0 00:16:13.300 22:42:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.300 22:42:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:13.300 22:42:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.300 22:42:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:13.300 22:42:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.300 22:42:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:13.300 Found net devices under 0000:31:00.1: cvl_0_1 00:16:13.300 22:42:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.300 22:42:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:13.300 22:42:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:13.300 22:42:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:13.300 22:42:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:13.300 22:42:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:13.300 22:42:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.300 22:42:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.300 22:42:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.300 22:42:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:13.300 22:42:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.300 22:42:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.300 22:42:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:13.300 22:42:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.300 22:42:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.300 22:42:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:13.300 22:42:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:13.300 22:42:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:13.300 22:42:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:13.300 22:42:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:13.300 22:42:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:13.300 22:42:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:13.300 22:42:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:13.300 22:42:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:13.300 22:42:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:13.300 22:42:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:13.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:16:13.300 00:16:13.300 --- 10.0.0.2 ping statistics --- 00:16:13.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.300 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:16:13.300 22:42:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:13.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:13.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:16:13.300 00:16:13.300 --- 10.0.0.1 ping statistics --- 00:16:13.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.300 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:16:13.300 22:42:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.300 22:42:57 -- nvmf/common.sh@410 -- # return 0 00:16:13.300 22:42:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:13.300 22:42:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.300 22:42:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:13.300 22:42:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:13.300 22:42:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.300 22:42:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:13.300 22:42:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:13.300 22:42:57 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:13.300 22:42:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:13.300 22:42:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:13.300 22:42:57 -- common/autotest_common.sh@10 -- # set +x 00:16:13.300 ************************************ 00:16:13.300 START TEST nvmf_host_management 00:16:13.300 ************************************ 00:16:13.300 22:42:57 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:13.300 22:42:57 -- target/host_management.sh@69 -- # starttarget 00:16:13.300 22:42:57 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:13.300 22:42:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:13.300 22:42:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:13.300 22:42:57 -- common/autotest_common.sh@10 -- # set +x 00:16:13.300 22:42:57 -- nvmf/common.sh@469 -- # nvmfpid=1066427 00:16:13.300 22:42:57 -- nvmf/common.sh@470 -- # waitforlisten 1066427 00:16:13.300 22:42:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:13.300 22:42:57 -- common/autotest_common.sh@819 -- # '[' -z 1066427 ']' 00:16:13.300 22:42:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.300 22:42:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:13.300 22:42:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.300 22:42:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:13.300 22:42:57 -- common/autotest_common.sh@10 -- # set +x 00:16:13.300 [2024-04-15 22:42:58.006268] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:16:13.300 [2024-04-15 22:42:58.006332] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.300 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.300 [2024-04-15 22:42:58.084516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.560 [2024-04-15 22:42:58.156391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.560 [2024-04-15 22:42:58.156528] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.560 [2024-04-15 22:42:58.156537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.560 [2024-04-15 22:42:58.156551] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.560 [2024-04-15 22:42:58.156692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.560 [2024-04-15 22:42:58.156884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.560 [2024-04-15 22:42:58.157043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.560 [2024-04-15 22:42:58.157044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:14.132 22:42:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:14.132 22:42:58 -- common/autotest_common.sh@852 -- # return 0 00:16:14.132 22:42:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.132 22:42:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:14.132 22:42:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.132 22:42:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.132 22:42:58 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.132 22:42:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.132 22:42:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.132 [2024-04-15 22:42:58.826716] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.132 22:42:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.132 22:42:58 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:14.132 22:42:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:14.132 22:42:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.132 22:42:58 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:14.132 22:42:58 -- target/host_management.sh@23 -- # cat 00:16:14.132 22:42:58 -- target/host_management.sh@30 -- # rpc_cmd 00:16:14.132 22:42:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.132 22:42:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.132 Malloc0 00:16:14.132 [2024-04-15 22:42:58.886192] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.132 22:42:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.132 22:42:58 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:14.132 22:42:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:14.132 22:42:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.132 22:42:58 -- target/host_management.sh@73 -- # perfpid=1066600 00:16:14.132 22:42:58 -- target/host_management.sh@74 -- # 
waitforlisten 1066600 /var/tmp/bdevperf.sock 00:16:14.132 22:42:58 -- common/autotest_common.sh@819 -- # '[' -z 1066600 ']' 00:16:14.132 22:42:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:14.132 22:42:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:14.393 22:42:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:14.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:14.394 22:42:58 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:14.394 22:42:58 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:14.394 22:42:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:14.394 22:42:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.394 22:42:58 -- nvmf/common.sh@520 -- # config=() 00:16:14.394 22:42:58 -- nvmf/common.sh@520 -- # local subsystem config 00:16:14.394 22:42:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:14.394 22:42:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:14.394 { 00:16:14.394 "params": { 00:16:14.394 "name": "Nvme$subsystem", 00:16:14.394 "trtype": "$TEST_TRANSPORT", 00:16:14.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.394 "adrfam": "ipv4", 00:16:14.394 "trsvcid": "$NVMF_PORT", 00:16:14.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.394 "hdgst": ${hdgst:-false}, 00:16:14.394 "ddgst": ${ddgst:-false} 00:16:14.394 }, 00:16:14.394 "method": "bdev_nvme_attach_controller" 00:16:14.394 } 00:16:14.394 EOF 00:16:14.394 )") 00:16:14.394 22:42:58 -- nvmf/common.sh@542 -- # cat 00:16:14.394 22:42:58 -- nvmf/common.sh@544 -- # jq . 00:16:14.394 22:42:58 -- nvmf/common.sh@545 -- # IFS=, 00:16:14.394 22:42:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:14.394 "params": { 00:16:14.394 "name": "Nvme0", 00:16:14.394 "trtype": "tcp", 00:16:14.394 "traddr": "10.0.0.2", 00:16:14.394 "adrfam": "ipv4", 00:16:14.394 "trsvcid": "4420", 00:16:14.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:14.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:14.394 "hdgst": false, 00:16:14.394 "ddgst": false 00:16:14.394 }, 00:16:14.394 "method": "bdev_nvme_attach_controller" 00:16:14.394 }' 00:16:14.394 [2024-04-15 22:42:58.980869] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:14.394 [2024-04-15 22:42:58.980919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066600 ] 00:16:14.394 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.394 [2024-04-15 22:42:59.046641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.394 [2024-04-15 22:42:59.109595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.655 Running I/O for 10 seconds... 
00:16:15.233 22:42:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:15.233 22:42:59 -- common/autotest_common.sh@852 -- # return 0 00:16:15.233 22:42:59 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:15.233 22:42:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.233 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:16:15.233 22:42:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.233 22:42:59 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.233 22:42:59 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:15.233 22:42:59 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:15.233 22:42:59 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:15.233 22:42:59 -- target/host_management.sh@52 -- # local ret=1 00:16:15.233 22:42:59 -- target/host_management.sh@53 -- # local i 00:16:15.233 22:42:59 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:15.233 22:42:59 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:15.233 22:42:59 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:15.233 22:42:59 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:15.233 22:42:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.233 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:16:15.233 22:42:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.233 22:42:59 -- target/host_management.sh@55 -- # read_io_count=947 00:16:15.233 22:42:59 -- target/host_management.sh@58 -- # '[' 947 -ge 100 ']' 00:16:15.233 22:42:59 -- target/host_management.sh@59 -- # ret=0 00:16:15.233 22:42:59 -- target/host_management.sh@60 -- # break 00:16:15.233 22:42:59 -- target/host_management.sh@64 -- # return 0 00:16:15.233 22:42:59 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:15.233 22:42:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.233 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:16:15.233 [2024-04-15 22:42:59.829340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the 
state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.233 [2024-04-15 22:42:59.829471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.829587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afccc0 is same with the state(5) to be set 00:16:15.234 [2024-04-15 22:42:59.831882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.831920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.831936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.831944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.831954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.831961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.831970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.831977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.831986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.831993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11136 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:15.234 [2024-04-15 22:42:59.832402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.234 [2024-04-15 22:42:59.832417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.234 [2024-04-15 22:42:59.832426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 
22:42:59.832570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832734] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.832954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.235 [2024-04-15 22:42:59.832961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.235 [2024-04-15 22:42:59.833016] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19dd110 was disconnected and freed. reset controller. 00:16:15.235 [2024-04-15 22:42:59.834207] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:15.235 22:42:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.235 22:42:59 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:15.235 22:42:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.235 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:16:15.235 task offset: 4224 on job bdev=Nvme0n1 fails 00:16:15.235 00:16:15.235 Latency(us) 00:16:15.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.235 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:15.235 Job: Nvme0n1 ended in about 0.42 seconds with error 00:16:15.235 Verification LBA range: start 0x0 length 0x400 00:16:15.235 Nvme0n1 : 0.42 2561.24 160.08 152.48 0.00 23178.20 1467.73 26105.17 00:16:15.235 =================================================================================================================== 00:16:15.235 Total : 2561.24 160.08 152.48 0.00 23178.20 1467.73 26105.17 00:16:15.235 [2024-04-15 22:42:59.836188] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:15.235 [2024-04-15 22:42:59.836210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19df450 (9): Bad file descriptor 00:16:15.235 [2024-04-15 22:42:59.838932] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:15.235 [2024-04-15 22:42:59.839025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:15.235 [2024-04-15 22:42:59.839053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.236 [2024-04-15 22:42:59.839069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:15.236 [2024-04-15 22:42:59.839077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:15.236 [2024-04-15 22:42:59.839084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:15.236 [2024-04-15 22:42:59.839091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19df450 00:16:15.236 [2024-04-15 22:42:59.839112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19df450 (9): Bad file descriptor 00:16:15.236 [2024-04-15 22:42:59.839125] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:15.236 [2024-04-15 22:42:59.839136] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:15.236 [2024-04-15 22:42:59.839145] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:15.236 [2024-04-15 22:42:59.839157] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:15.236 22:42:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.236 22:42:59 -- target/host_management.sh@87 -- # sleep 1 00:16:16.200 22:43:00 -- target/host_management.sh@91 -- # kill -9 1066600 00:16:16.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1066600) - No such process 00:16:16.200 22:43:00 -- target/host_management.sh@91 -- # true 00:16:16.200 22:43:00 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:16.200 22:43:00 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:16.200 22:43:00 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:16.200 22:43:00 -- nvmf/common.sh@520 -- # config=() 00:16:16.200 22:43:00 -- nvmf/common.sh@520 -- # local subsystem config 00:16:16.200 22:43:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:16.200 22:43:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:16.200 { 00:16:16.200 "params": { 00:16:16.200 "name": "Nvme$subsystem", 00:16:16.200 "trtype": "$TEST_TRANSPORT", 00:16:16.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.200 "adrfam": "ipv4", 00:16:16.200 "trsvcid": "$NVMF_PORT", 00:16:16.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.200 "hdgst": ${hdgst:-false}, 00:16:16.200 "ddgst": ${ddgst:-false} 00:16:16.200 }, 00:16:16.200 "method": "bdev_nvme_attach_controller" 00:16:16.200 } 00:16:16.200 EOF 00:16:16.200 )") 00:16:16.200 22:43:00 -- nvmf/common.sh@542 -- # cat 00:16:16.200 22:43:00 -- nvmf/common.sh@544 -- # jq . 
00:16:16.200 22:43:00 -- nvmf/common.sh@545 -- # IFS=, 00:16:16.200 22:43:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:16.200 "params": { 00:16:16.200 "name": "Nvme0", 00:16:16.200 "trtype": "tcp", 00:16:16.200 "traddr": "10.0.0.2", 00:16:16.200 "adrfam": "ipv4", 00:16:16.200 "trsvcid": "4420", 00:16:16.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:16.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:16.200 "hdgst": false, 00:16:16.200 "ddgst": false 00:16:16.200 }, 00:16:16.200 "method": "bdev_nvme_attach_controller" 00:16:16.200 }' 00:16:16.200 [2024-04-15 22:43:00.910453] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:16.200 [2024-04-15 22:43:00.910521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067042 ] 00:16:16.200 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.200 [2024-04-15 22:43:00.976418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.460 [2024-04-15 22:43:01.038987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.460 Running I/O for 1 seconds... 00:16:17.846 00:16:17.846 Latency(us) 00:16:17.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.846 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:17.846 Verification LBA range: start 0x0 length 0x400 00:16:17.846 Nvme0n1 : 1.01 2724.40 170.27 0.00 0.00 23171.37 3099.31 30365.01 00:16:17.846 =================================================================================================================== 00:16:17.846 Total : 2724.40 170.27 0.00 0.00 23171.37 3099.31 30365.01 00:16:17.846 22:43:02 -- target/host_management.sh@101 -- # stoptarget 00:16:17.846 22:43:02 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:17.846 22:43:02 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:17.846 22:43:02 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:17.846 22:43:02 -- target/host_management.sh@40 -- # nvmftestfini 00:16:17.846 22:43:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:17.846 22:43:02 -- nvmf/common.sh@116 -- # sync 00:16:17.846 22:43:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:17.846 22:43:02 -- nvmf/common.sh@119 -- # set +e 00:16:17.846 22:43:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:17.846 22:43:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:17.846 rmmod nvme_tcp 00:16:17.846 rmmod nvme_fabrics 00:16:17.846 rmmod nvme_keyring 00:16:17.846 22:43:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:17.846 22:43:02 -- nvmf/common.sh@123 -- # set -e 00:16:17.846 22:43:02 -- nvmf/common.sh@124 -- # return 0 00:16:17.846 22:43:02 -- nvmf/common.sh@477 -- # '[' -n 1066427 ']' 00:16:17.846 22:43:02 -- nvmf/common.sh@478 -- # killprocess 1066427 00:16:17.846 22:43:02 -- common/autotest_common.sh@926 -- # '[' -z 1066427 ']' 00:16:17.846 22:43:02 -- common/autotest_common.sh@930 -- # kill -0 1066427 00:16:17.846 22:43:02 -- common/autotest_common.sh@931 -- # uname 00:16:17.846 22:43:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:17.846 22:43:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1066427 00:16:17.846 22:43:02 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:17.847 22:43:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:17.847 22:43:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1066427' 00:16:17.847 killing process with pid 1066427 00:16:17.847 22:43:02 -- common/autotest_common.sh@945 -- # kill 1066427 00:16:17.847 22:43:02 -- common/autotest_common.sh@950 -- # wait 1066427 00:16:18.108 [2024-04-15 22:43:02.665406] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:18.108 22:43:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:18.108 22:43:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:18.108 22:43:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:18.108 22:43:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.108 22:43:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:18.108 22:43:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.108 22:43:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.108 22:43:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.019 22:43:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:20.019 00:16:20.019 real 0m6.812s 00:16:20.019 user 0m20.564s 00:16:20.019 sys 0m1.070s 00:16:20.019 22:43:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.019 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:16:20.019 ************************************ 00:16:20.019 END TEST nvmf_host_management 00:16:20.019 ************************************ 00:16:20.019 22:43:04 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:20.019 00:16:20.019 real 0m15.071s 00:16:20.019 user 0m22.782s 00:16:20.019 sys 0m7.042s 00:16:20.019 22:43:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.019 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:16:20.019 ************************************ 00:16:20.019 END TEST nvmf_host_management 00:16:20.019 ************************************ 00:16:20.280 22:43:04 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:20.280 22:43:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:20.280 22:43:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:20.280 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:16:20.280 ************************************ 00:16:20.280 START TEST nvmf_lvol 00:16:20.280 ************************************ 00:16:20.280 22:43:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:20.280 * Looking for test storage... 
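The bdevperf pass traced in the host-management run above builds its target description on the fly: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem index from a here-document, joins the entries with commas, validates the result with jq and hands it to bdevperf via --json /dev/fd/62. A minimal standalone sketch of that pattern follows; the address, port and digest values are hard-coded placeholders here, whereas the helper in the trace substitutes $NVMF_FIRST_TARGET_IP, $NVMF_PORT, ${hdgst:-false} and ${ddgst:-false}.

```bash
#!/usr/bin/env bash
# Sketch of the JSON-generation pattern seen in nvmf/common.sh above:
# one attach-controller entry per subsystem id, comma-joined on stdout.
gen_target_json() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # join the entries with commas
}

# In the trace above the joined output is piped through jq and passed to
# bdevperf as --json /dev/fd/62 together with -q 64 -o 65536 -w verify -t 1.
gen_target_json 0
```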
00:16:20.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.280 22:43:04 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.280 22:43:04 -- nvmf/common.sh@7 -- # uname -s 00:16:20.280 22:43:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.280 22:43:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.280 22:43:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.280 22:43:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.280 22:43:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.280 22:43:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.280 22:43:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.280 22:43:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.280 22:43:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.280 22:43:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.280 22:43:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:20.280 22:43:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:20.280 22:43:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.280 22:43:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.280 22:43:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.280 22:43:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.280 22:43:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.280 22:43:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.280 22:43:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.280 22:43:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.280 22:43:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.280 22:43:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.280 22:43:04 -- paths/export.sh@5 -- # export PATH 00:16:20.280 22:43:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.280 22:43:04 -- nvmf/common.sh@46 -- # : 0 00:16:20.280 22:43:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:20.280 22:43:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:20.280 22:43:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:20.280 22:43:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.280 22:43:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.280 22:43:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:20.280 22:43:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:20.280 22:43:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:20.280 22:43:04 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.280 22:43:04 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.280 22:43:04 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:20.280 22:43:04 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:20.280 22:43:04 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:20.280 22:43:04 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:20.280 22:43:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:20.280 22:43:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.280 22:43:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:20.280 22:43:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:20.280 22:43:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:20.280 22:43:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.280 22:43:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.280 22:43:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.280 22:43:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:20.280 22:43:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:20.280 22:43:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:20.280 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:16:28.427 22:43:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:28.427 22:43:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:28.427 22:43:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:28.427 22:43:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:28.427 22:43:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:28.427 22:43:12 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:16:28.427 22:43:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:28.427 22:43:12 -- nvmf/common.sh@294 -- # net_devs=() 00:16:28.427 22:43:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:28.427 22:43:12 -- nvmf/common.sh@295 -- # e810=() 00:16:28.427 22:43:12 -- nvmf/common.sh@295 -- # local -ga e810 00:16:28.427 22:43:12 -- nvmf/common.sh@296 -- # x722=() 00:16:28.427 22:43:12 -- nvmf/common.sh@296 -- # local -ga x722 00:16:28.427 22:43:12 -- nvmf/common.sh@297 -- # mlx=() 00:16:28.427 22:43:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:28.427 22:43:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.427 22:43:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:28.427 22:43:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:28.427 22:43:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:28.427 22:43:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:28.427 22:43:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:28.427 22:43:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:28.427 22:43:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:28.427 22:43:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:28.427 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:28.427 22:43:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:28.427 22:43:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:28.427 22:43:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.427 22:43:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.427 22:43:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:28.427 22:43:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:28.427 22:43:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:28.427 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:28.427 22:43:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:28.428 22:43:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:28.428 22:43:12 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.428 22:43:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:28.428 22:43:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.428 22:43:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:28.428 Found net devices under 0000:31:00.0: cvl_0_0 00:16:28.428 22:43:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.428 22:43:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:28.428 22:43:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.428 22:43:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:28.428 22:43:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.428 22:43:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:28.428 Found net devices under 0000:31:00.1: cvl_0_1 00:16:28.428 22:43:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.428 22:43:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:28.428 22:43:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:28.428 22:43:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:28.428 22:43:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:28.428 22:43:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.428 22:43:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.428 22:43:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.428 22:43:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:28.428 22:43:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.428 22:43:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.428 22:43:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:28.428 22:43:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.428 22:43:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.428 22:43:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:28.428 22:43:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:28.428 22:43:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.428 22:43:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.428 22:43:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.428 22:43:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.428 22:43:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:28.428 22:43:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.428 22:43:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.428 22:43:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.428 22:43:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:28.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:16:28.428 00:16:28.428 --- 10.0.0.2 ping statistics --- 00:16:28.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.428 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:16:28.428 22:43:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:16:28.428 00:16:28.428 --- 10.0.0.1 ping statistics --- 00:16:28.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.428 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:16:28.428 22:43:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.428 22:43:13 -- nvmf/common.sh@410 -- # return 0 00:16:28.428 22:43:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:28.428 22:43:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.428 22:43:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:28.428 22:43:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:28.428 22:43:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.428 22:43:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:28.428 22:43:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:28.428 22:43:13 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:28.428 22:43:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:28.428 22:43:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:28.428 22:43:13 -- common/autotest_common.sh@10 -- # set +x 00:16:28.428 22:43:13 -- nvmf/common.sh@469 -- # nvmfpid=1072155 00:16:28.428 22:43:13 -- nvmf/common.sh@470 -- # waitforlisten 1072155 00:16:28.428 22:43:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:28.428 22:43:13 -- common/autotest_common.sh@819 -- # '[' -z 1072155 ']' 00:16:28.428 22:43:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.428 22:43:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:28.428 22:43:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.428 22:43:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:28.428 22:43:13 -- common/autotest_common.sh@10 -- # set +x 00:16:28.428 [2024-04-15 22:43:13.219824] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:28.428 [2024-04-15 22:43:13.219894] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.689 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.689 [2024-04-15 22:43:13.300556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.689 [2024-04-15 22:43:13.373044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:28.689 [2024-04-15 22:43:13.373172] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.689 [2024-04-15 22:43:13.373180] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.689 [2024-04-15 22:43:13.373187] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:28.689 [2024-04-15 22:43:13.373226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.689 [2024-04-15 22:43:13.373330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.689 [2024-04-15 22:43:13.373334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.261 22:43:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:29.261 22:43:13 -- common/autotest_common.sh@852 -- # return 0 00:16:29.261 22:43:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:29.261 22:43:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:29.261 22:43:13 -- common/autotest_common.sh@10 -- # set +x 00:16:29.261 22:43:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.261 22:43:14 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:29.519 [2024-04-15 22:43:14.181785] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.519 22:43:14 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:29.780 22:43:14 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:29.780 22:43:14 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:29.780 22:43:14 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:29.780 22:43:14 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:30.041 22:43:14 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:30.302 22:43:14 -- target/nvmf_lvol.sh@29 -- # lvs=3b4aadb4-bd75-4be3-a741-d757a53341be 00:16:30.302 22:43:14 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b4aadb4-bd75-4be3-a741-d757a53341be lvol 20 00:16:30.302 22:43:15 -- target/nvmf_lvol.sh@32 -- # lvol=37aeaead-5d51-4b01-bf91-5e9177c1a79d 00:16:30.302 22:43:15 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:30.627 22:43:15 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37aeaead-5d51-4b01-bf91-5e9177c1a79d 00:16:30.627 22:43:15 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:30.888 [2024-04-15 22:43:15.523664] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.888 22:43:15 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.150 22:43:15 -- target/nvmf_lvol.sh@42 -- # perf_pid=1072604 00:16:31.150 22:43:15 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:31.150 22:43:15 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:31.150 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.091 
22:43:16 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 37aeaead-5d51-4b01-bf91-5e9177c1a79d MY_SNAPSHOT 00:16:32.351 22:43:16 -- target/nvmf_lvol.sh@47 -- # snapshot=959666c7-6286-422f-bddd-48730760fd06 00:16:32.351 22:43:16 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 37aeaead-5d51-4b01-bf91-5e9177c1a79d 30 00:16:32.351 22:43:17 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 959666c7-6286-422f-bddd-48730760fd06 MY_CLONE 00:16:32.612 22:43:17 -- target/nvmf_lvol.sh@49 -- # clone=4738080c-f568-486b-8720-fb8a241d2779 00:16:32.612 22:43:17 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4738080c-f568-486b-8720-fb8a241d2779 00:16:32.871 22:43:17 -- target/nvmf_lvol.sh@53 -- # wait 1072604 00:16:42.872 Initializing NVMe Controllers 00:16:42.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:42.872 Controller IO queue size 128, less than required. 00:16:42.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:42.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:42.872 Initialization complete. Launching workers. 00:16:42.872 ======================================================== 00:16:42.872 Latency(us) 00:16:42.872 Device Information : IOPS MiB/s Average min max 00:16:42.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12422.10 48.52 10308.44 1462.62 52844.63 00:16:42.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12530.10 48.95 10216.84 3895.08 65045.42 00:16:42.872 ======================================================== 00:16:42.872 Total : 24952.20 97.47 10262.44 1462.62 65045.42 00:16:42.872 00:16:42.872 22:43:26 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:42.872 22:43:26 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37aeaead-5d51-4b01-bf91-5e9177c1a79d 00:16:42.872 22:43:26 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b4aadb4-bd75-4be3-a741-d757a53341be 00:16:42.872 22:43:26 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:42.872 22:43:26 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:42.872 22:43:26 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:42.872 22:43:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.872 22:43:26 -- nvmf/common.sh@116 -- # sync 00:16:42.872 22:43:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:42.872 22:43:26 -- nvmf/common.sh@119 -- # set +e 00:16:42.872 22:43:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.872 22:43:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:42.872 rmmod nvme_tcp 00:16:42.872 rmmod nvme_fabrics 00:16:42.872 rmmod nvme_keyring 00:16:42.872 22:43:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.872 22:43:26 -- nvmf/common.sh@123 -- # set -e 00:16:42.872 22:43:26 -- nvmf/common.sh@124 -- # return 0 00:16:42.872 22:43:26 -- nvmf/common.sh@477 -- # '[' -n 1072155 ']' 
00:16:42.872 22:43:26 -- nvmf/common.sh@478 -- # killprocess 1072155 00:16:42.872 22:43:26 -- common/autotest_common.sh@926 -- # '[' -z 1072155 ']' 00:16:42.872 22:43:26 -- common/autotest_common.sh@930 -- # kill -0 1072155 00:16:42.872 22:43:26 -- common/autotest_common.sh@931 -- # uname 00:16:42.872 22:43:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:42.872 22:43:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1072155 00:16:42.872 22:43:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:42.872 22:43:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:42.872 22:43:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1072155' 00:16:42.872 killing process with pid 1072155 00:16:42.872 22:43:26 -- common/autotest_common.sh@945 -- # kill 1072155 00:16:42.872 22:43:26 -- common/autotest_common.sh@950 -- # wait 1072155 00:16:42.872 22:43:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.872 22:43:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:42.872 22:43:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:42.872 22:43:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.872 22:43:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:42.872 22:43:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.872 22:43:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.872 22:43:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.261 22:43:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:44.261 00:16:44.261 real 0m24.079s 00:16:44.261 user 1m3.833s 00:16:44.261 sys 0m8.316s 00:16:44.261 22:43:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.261 22:43:28 -- common/autotest_common.sh@10 -- # set +x 00:16:44.261 ************************************ 00:16:44.261 END TEST nvmf_lvol 00:16:44.261 ************************************ 00:16:44.261 22:43:28 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:44.262 22:43:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:44.262 22:43:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:44.262 22:43:28 -- common/autotest_common.sh@10 -- # set +x 00:16:44.262 ************************************ 00:16:44.262 START TEST nvmf_lvs_grow 00:16:44.262 ************************************ 00:16:44.262 22:43:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:44.262 * Looking for test storage... 
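Before the nvmf_lvs_grow run that starts above gets going, the nvmf_lvol pass that just finished is worth condensing: every step is a plain rpc.py call against the nvmf_tgt started with -m 0x7 inside the target namespace. The outline below is a sketch of the traced sequence, not a drop-in replacement for nvmf_lvol.sh; the rpc.py path is shortened and the UUIDs are captured from command output rather than being the values from this particular run.

```bash
#!/usr/bin/env bash
# Condensed outline of the nvmf_lvol sequence traced above.
rpc_py=./scripts/rpc.py   # shortened; the run above uses the full workspace path

$rpc_py nvmf_create_transport -t tcp -o -u 8192

# Two 64 MiB / 512 B malloc bdevs striped into a RAID0, used as the lvstore base.
$rpc_py bdev_malloc_create 64 512        # -> Malloc0
$rpc_py bdev_malloc_create 64 512        # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB volume

# Export the lvol over NVMe/TCP.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# The trace launches spdk_nvme_perf -w randwrite -t 10 against 10.0.0.2:4420 in
# the background here, then mutates the lvol while that I/O is running.
snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc_py bdev_lvol_resize "$lvol" 30                     # grow the live lvol to 30 MiB
clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
$rpc_py bdev_lvol_inflate "$clone"                      # decouple the clone from its snapshot

# Teardown mirrors the end of the trace.
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc_py bdev_lvol_delete "$lvol"
$rpc_py bdev_lvol_delete_lvstore -u "$lvs"
```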
00:16:44.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.262 22:43:29 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.262 22:43:29 -- nvmf/common.sh@7 -- # uname -s 00:16:44.262 22:43:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.262 22:43:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.262 22:43:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.262 22:43:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.262 22:43:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.262 22:43:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.262 22:43:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.262 22:43:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.262 22:43:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.262 22:43:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.523 22:43:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:44.523 22:43:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:44.523 22:43:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.523 22:43:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.523 22:43:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.523 22:43:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.523 22:43:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.523 22:43:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.523 22:43:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.523 22:43:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.523 22:43:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.523 22:43:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.523 22:43:29 -- paths/export.sh@5 -- # export PATH 00:16:44.523 22:43:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.523 22:43:29 -- nvmf/common.sh@46 -- # : 0 00:16:44.523 22:43:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:44.523 22:43:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:44.523 22:43:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:44.523 22:43:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.523 22:43:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.523 22:43:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:44.523 22:43:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:44.523 22:43:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:44.523 22:43:29 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.523 22:43:29 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:44.523 22:43:29 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:44.523 22:43:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:44.523 22:43:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.523 22:43:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:44.523 22:43:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:44.523 22:43:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:44.523 22:43:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.523 22:43:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.523 22:43:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.523 22:43:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:44.523 22:43:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:44.523 22:43:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:44.523 22:43:29 -- common/autotest_common.sh@10 -- # set +x 00:16:52.668 22:43:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:52.668 22:43:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:52.668 22:43:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:52.668 22:43:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:52.668 22:43:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:52.669 22:43:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:52.669 22:43:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:52.669 22:43:36 -- nvmf/common.sh@294 -- # net_devs=() 00:16:52.669 22:43:36 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:16:52.669 22:43:36 -- nvmf/common.sh@295 -- # e810=() 00:16:52.669 22:43:36 -- nvmf/common.sh@295 -- # local -ga e810 00:16:52.669 22:43:36 -- nvmf/common.sh@296 -- # x722=() 00:16:52.669 22:43:36 -- nvmf/common.sh@296 -- # local -ga x722 00:16:52.669 22:43:36 -- nvmf/common.sh@297 -- # mlx=() 00:16:52.669 22:43:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:52.669 22:43:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.669 22:43:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:52.669 22:43:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:52.669 22:43:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:52.669 22:43:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.669 22:43:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:52.669 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:52.669 22:43:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.669 22:43:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:52.669 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:52.669 22:43:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:52.669 22:43:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.669 22:43:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.669 22:43:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.669 22:43:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.669 22:43:36 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:52.669 Found net devices under 0000:31:00.0: cvl_0_0 00:16:52.669 22:43:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.669 22:43:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.669 22:43:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.669 22:43:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.669 22:43:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.669 22:43:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:52.669 Found net devices under 0000:31:00.1: cvl_0_1 00:16:52.669 22:43:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.669 22:43:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:52.669 22:43:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:52.669 22:43:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:52.669 22:43:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.669 22:43:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.669 22:43:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.669 22:43:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:52.669 22:43:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.669 22:43:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.669 22:43:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:52.669 22:43:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.669 22:43:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.669 22:43:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:52.669 22:43:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:52.669 22:43:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.669 22:43:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.669 22:43:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.669 22:43:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.669 22:43:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:52.669 22:43:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.669 22:43:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.669 22:43:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.669 22:43:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:52.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:16:52.669 00:16:52.669 --- 10.0.0.2 ping statistics --- 00:16:52.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.669 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:16:52.669 22:43:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:16:52.669 00:16:52.669 --- 10.0.0.1 ping statistics --- 00:16:52.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.669 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:16:52.669 22:43:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.669 22:43:36 -- nvmf/common.sh@410 -- # return 0 00:16:52.669 22:43:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:52.669 22:43:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.669 22:43:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:52.669 22:43:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.669 22:43:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:52.669 22:43:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:52.669 22:43:36 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:52.669 22:43:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:52.669 22:43:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:52.669 22:43:36 -- common/autotest_common.sh@10 -- # set +x 00:16:52.669 22:43:36 -- nvmf/common.sh@469 -- # nvmfpid=1079333 00:16:52.669 22:43:36 -- nvmf/common.sh@470 -- # waitforlisten 1079333 00:16:52.669 22:43:36 -- common/autotest_common.sh@819 -- # '[' -z 1079333 ']' 00:16:52.669 22:43:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.669 22:43:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:52.669 22:43:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.669 22:43:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:52.669 22:43:36 -- common/autotest_common.sh@10 -- # set +x 00:16:52.669 22:43:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:52.669 [2024-04-15 22:43:36.657966] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:52.669 [2024-04-15 22:43:36.658032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.669 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.669 [2024-04-15 22:43:36.736051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.669 [2024-04-15 22:43:36.807492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:52.669 [2024-04-15 22:43:36.807625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.669 [2024-04-15 22:43:36.807635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.669 [2024-04-15 22:43:36.807642] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
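The nvmf_tgt that is coming up here runs inside a network namespace prepared a few lines earlier by nvmf_tcp_init. Condensed, and using the cvl_0_0/cvl_0_1 interface names discovered in the trace above, that preparation is roughly the following sketch; it is an outline of the traced commands, not the common.sh implementation itself.

```bash
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above: the target-side port is
# moved into its own namespace and the pair is addressed 10.0.0.2 (target,
# inside the namespace) / 10.0.0.1 (initiator, in the root namespace).
target_if=cvl_0_0       # names as discovered in the trace above
initiator_if=cvl_0_1
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Let NVMe/TCP (port 4420) in through the initiator-side interface.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

# Sanity pings in both directions, as in the trace; the target application is
# then launched with: ip netns exec "$ns" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1
```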
00:16:52.669 [2024-04-15 22:43:36.807659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.669 22:43:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.669 22:43:37 -- common/autotest_common.sh@852 -- # return 0 00:16:52.669 22:43:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:52.669 22:43:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:52.669 22:43:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.669 22:43:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.669 22:43:37 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:52.930 [2024-04-15 22:43:37.583117] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:52.930 22:43:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:52.930 22:43:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:52.930 22:43:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.930 ************************************ 00:16:52.930 START TEST lvs_grow_clean 00:16:52.930 ************************************ 00:16:52.930 22:43:37 -- common/autotest_common.sh@1104 -- # lvs_grow 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.930 22:43:37 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:53.190 22:43:37 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:53.190 22:43:37 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:53.190 22:43:37 -- target/nvmf_lvs_grow.sh@28 -- # lvs=bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:16:53.190 22:43:37 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:16:53.190 22:43:37 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:53.450 22:43:38 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:53.450 22:43:38 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:53.450 22:43:38 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c lvol 150 00:16:53.450 22:43:38 -- target/nvmf_lvs_grow.sh@33 -- # lvol=681287e8-e867-465e-b926-2efd3537c7f6 00:16:53.450 22:43:38 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:53.450 22:43:38 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:53.710 [2024-04-15 22:43:38.392579] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:53.710 [2024-04-15 22:43:38.392633] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:53.710 true 00:16:53.710 22:43:38 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:16:53.710 22:43:38 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:53.970 22:43:38 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:53.970 22:43:38 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:53.970 22:43:38 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 681287e8-e867-465e-b926-2efd3537c7f6 00:16:54.231 22:43:38 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:54.231 [2024-04-15 22:43:38.962338] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.231 22:43:38 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:54.491 22:43:39 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1079991 00:16:54.491 22:43:39 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:54.491 22:43:39 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1079991 /var/tmp/bdevperf.sock 00:16:54.491 22:43:39 -- common/autotest_common.sh@819 -- # '[' -z 1079991 ']' 00:16:54.491 22:43:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.491 22:43:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.491 22:43:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.491 22:43:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.491 22:43:39 -- common/autotest_common.sh@10 -- # set +x 00:16:54.491 22:43:39 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:54.491 [2024-04-15 22:43:39.174103] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:16:54.491 [2024-04-15 22:43:39.174200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079991 ] 00:16:54.491 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.491 [2024-04-15 22:43:39.242863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.752 [2024-04-15 22:43:39.305180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.323 22:43:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.323 22:43:39 -- common/autotest_common.sh@852 -- # return 0 00:16:55.323 22:43:39 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:55.584 Nvme0n1 00:16:55.584 22:43:40 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:55.845 [ 00:16:55.845 { 00:16:55.845 "name": "Nvme0n1", 00:16:55.845 "aliases": [ 00:16:55.845 "681287e8-e867-465e-b926-2efd3537c7f6" 00:16:55.845 ], 00:16:55.845 "product_name": "NVMe disk", 00:16:55.845 "block_size": 4096, 00:16:55.845 "num_blocks": 38912, 00:16:55.845 "uuid": "681287e8-e867-465e-b926-2efd3537c7f6", 00:16:55.845 "assigned_rate_limits": { 00:16:55.845 "rw_ios_per_sec": 0, 00:16:55.845 "rw_mbytes_per_sec": 0, 00:16:55.845 "r_mbytes_per_sec": 0, 00:16:55.845 "w_mbytes_per_sec": 0 00:16:55.845 }, 00:16:55.845 "claimed": false, 00:16:55.845 "zoned": false, 00:16:55.845 "supported_io_types": { 00:16:55.845 "read": true, 00:16:55.845 "write": true, 00:16:55.845 "unmap": true, 00:16:55.845 "write_zeroes": true, 00:16:55.845 "flush": true, 00:16:55.845 "reset": true, 00:16:55.845 "compare": true, 00:16:55.845 "compare_and_write": true, 00:16:55.845 "abort": true, 00:16:55.845 "nvme_admin": true, 00:16:55.845 "nvme_io": true 00:16:55.845 }, 00:16:55.845 "driver_specific": { 00:16:55.845 "nvme": [ 00:16:55.845 { 00:16:55.845 "trid": { 00:16:55.845 "trtype": "TCP", 00:16:55.845 "adrfam": "IPv4", 00:16:55.845 "traddr": "10.0.0.2", 00:16:55.845 "trsvcid": "4420", 00:16:55.845 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:55.845 }, 00:16:55.845 "ctrlr_data": { 00:16:55.845 "cntlid": 1, 00:16:55.845 "vendor_id": "0x8086", 00:16:55.845 "model_number": "SPDK bdev Controller", 00:16:55.845 "serial_number": "SPDK0", 00:16:55.845 "firmware_revision": "24.01.1", 00:16:55.845 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:55.845 "oacs": { 00:16:55.845 "security": 0, 00:16:55.845 "format": 0, 00:16:55.845 "firmware": 0, 00:16:55.845 "ns_manage": 0 00:16:55.845 }, 00:16:55.845 "multi_ctrlr": true, 00:16:55.845 "ana_reporting": false 00:16:55.845 }, 00:16:55.845 "vs": { 00:16:55.845 "nvme_version": "1.3" 00:16:55.845 }, 00:16:55.845 "ns_data": { 00:16:55.845 "id": 1, 00:16:55.845 "can_share": true 00:16:55.845 } 00:16:55.845 } 00:16:55.845 ], 00:16:55.845 "mp_policy": "active_passive" 00:16:55.845 } 00:16:55.845 } 00:16:55.845 ] 00:16:55.845 22:43:40 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1080155 00:16:55.845 22:43:40 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:55.845 22:43:40 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:55.845 Running I/O 
for 10 seconds... 00:16:56.788 Latency(us) 00:16:56.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.788 Nvme0n1 : 1.00 18691.00 73.01 0.00 0.00 0.00 0.00 0.00 00:16:56.788 =================================================================================================================== 00:16:56.788 Total : 18691.00 73.01 0.00 0.00 0.00 0.00 0.00 00:16:56.788 00:16:57.728 22:43:42 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:16:57.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.990 Nvme0n1 : 2.00 18753.50 73.26 0.00 0.00 0.00 0.00 0.00 00:16:57.990 =================================================================================================================== 00:16:57.990 Total : 18753.50 73.26 0.00 0.00 0.00 0.00 0.00 00:16:57.990 00:16:57.990 true 00:16:57.990 22:43:42 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:16:57.990 22:43:42 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:57.990 22:43:42 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:57.990 22:43:42 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:57.990 22:43:42 -- target/nvmf_lvs_grow.sh@65 -- # wait 1080155 00:16:58.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.932 Nvme0n1 : 3.00 18798.00 73.43 0.00 0.00 0.00 0.00 0.00 00:16:58.932 =================================================================================================================== 00:16:58.932 Total : 18798.00 73.43 0.00 0.00 0.00 0.00 0.00 00:16:58.932 00:16:59.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.874 Nvme0n1 : 4.00 18816.75 73.50 0.00 0.00 0.00 0.00 0.00 00:16:59.874 =================================================================================================================== 00:16:59.874 Total : 18816.75 73.50 0.00 0.00 0.00 0.00 0.00 00:16:59.874 00:17:00.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.816 Nvme0n1 : 5.00 18842.00 73.60 0.00 0.00 0.00 0.00 0.00 00:17:00.816 =================================================================================================================== 00:17:00.816 Total : 18842.00 73.60 0.00 0.00 0.00 0.00 0.00 00:17:00.816 00:17:01.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.759 Nvme0n1 : 6.00 18859.00 73.67 0.00 0.00 0.00 0.00 0.00 00:17:01.759 =================================================================================================================== 00:17:01.759 Total : 18859.00 73.67 0.00 0.00 0.00 0.00 0.00 00:17:01.759 00:17:03.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.232 Nvme0n1 : 7.00 18880.29 73.75 0.00 0.00 0.00 0.00 0.00 00:17:03.233 =================================================================================================================== 00:17:03.233 Total : 18880.29 73.75 0.00 0.00 0.00 0.00 0.00 00:17:03.233 00:17:03.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.805 Nvme0n1 : 8.00 18889.25 73.79 0.00 0.00 0.00 0.00 0.00 00:17:03.805 
=================================================================================================================== 00:17:03.805 Total : 18889.25 73.79 0.00 0.00 0.00 0.00 0.00 00:17:03.805 00:17:05.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.191 Nvme0n1 : 9.00 18902.44 73.84 0.00 0.00 0.00 0.00 0.00 00:17:05.191 =================================================================================================================== 00:17:05.191 Total : 18902.44 73.84 0.00 0.00 0.00 0.00 0.00 00:17:05.191 00:17:05.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.763 Nvme0n1 : 10.00 18912.20 73.88 0.00 0.00 0.00 0.00 0.00 00:17:05.763 =================================================================================================================== 00:17:05.763 Total : 18912.20 73.88 0.00 0.00 0.00 0.00 0.00 00:17:05.763 00:17:06.024 00:17:06.024 Latency(us) 00:17:06.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.024 Nvme0n1 : 10.00 18911.29 73.87 0.00 0.00 6763.73 2075.31 9830.40 00:17:06.024 =================================================================================================================== 00:17:06.024 Total : 18911.29 73.87 0.00 0.00 6763.73 2075.31 9830.40 00:17:06.024 0 00:17:06.024 22:43:50 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1079991 00:17:06.024 22:43:50 -- common/autotest_common.sh@926 -- # '[' -z 1079991 ']' 00:17:06.024 22:43:50 -- common/autotest_common.sh@930 -- # kill -0 1079991 00:17:06.024 22:43:50 -- common/autotest_common.sh@931 -- # uname 00:17:06.024 22:43:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:06.024 22:43:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1079991 00:17:06.024 22:43:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:06.024 22:43:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:06.024 22:43:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1079991' 00:17:06.024 killing process with pid 1079991 00:17:06.024 22:43:50 -- common/autotest_common.sh@945 -- # kill 1079991 00:17:06.024 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.024 00:17:06.024 Latency(us) 00:17:06.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.024 =================================================================================================================== 00:17:06.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:06.024 22:43:50 -- common/autotest_common.sh@950 -- # wait 1079991 00:17:06.024 22:43:50 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:06.285 22:43:50 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:17:06.285 22:43:50 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:06.546 22:43:51 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:06.546 22:43:51 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:06.546 22:43:51 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:06.547 [2024-04-15 22:43:51.250683] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:06.547 22:43:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:17:06.547 22:43:51 -- common/autotest_common.sh@640 -- # local es=0 00:17:06.547 22:43:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:17:06.547 22:43:51 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.547 22:43:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.547 22:43:51 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.547 22:43:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.547 22:43:51 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.547 22:43:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.547 22:43:51 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.547 22:43:51 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:06.547 22:43:51 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:17:06.807 request: 00:17:06.807 { 00:17:06.807 "uuid": "bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c", 00:17:06.807 "method": "bdev_lvol_get_lvstores", 00:17:06.807 "req_id": 1 00:17:06.807 } 00:17:06.807 Got JSON-RPC error response 00:17:06.807 response: 00:17:06.807 { 00:17:06.807 "code": -19, 00:17:06.807 "message": "No such device" 00:17:06.807 } 00:17:06.807 22:43:51 -- common/autotest_common.sh@643 -- # es=1 00:17:06.807 22:43:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:06.807 22:43:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:06.807 22:43:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:06.807 22:43:51 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:07.068 aio_bdev 00:17:07.068 22:43:51 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 681287e8-e867-465e-b926-2efd3537c7f6 00:17:07.068 22:43:51 -- common/autotest_common.sh@887 -- # local bdev_name=681287e8-e867-465e-b926-2efd3537c7f6 00:17:07.068 22:43:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:07.068 22:43:51 -- common/autotest_common.sh@889 -- # local i 00:17:07.068 22:43:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:07.068 22:43:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:07.068 22:43:51 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:07.068 22:43:51 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 681287e8-e867-465e-b926-2efd3537c7f6 -t 2000 00:17:07.329 [ 00:17:07.329 { 00:17:07.329 "name": "681287e8-e867-465e-b926-2efd3537c7f6", 00:17:07.329 "aliases": [ 00:17:07.329 "lvs/lvol" 
00:17:07.329 ], 00:17:07.329 "product_name": "Logical Volume", 00:17:07.329 "block_size": 4096, 00:17:07.329 "num_blocks": 38912, 00:17:07.329 "uuid": "681287e8-e867-465e-b926-2efd3537c7f6", 00:17:07.329 "assigned_rate_limits": { 00:17:07.329 "rw_ios_per_sec": 0, 00:17:07.329 "rw_mbytes_per_sec": 0, 00:17:07.329 "r_mbytes_per_sec": 0, 00:17:07.329 "w_mbytes_per_sec": 0 00:17:07.329 }, 00:17:07.329 "claimed": false, 00:17:07.329 "zoned": false, 00:17:07.329 "supported_io_types": { 00:17:07.329 "read": true, 00:17:07.329 "write": true, 00:17:07.329 "unmap": true, 00:17:07.329 "write_zeroes": true, 00:17:07.329 "flush": false, 00:17:07.329 "reset": true, 00:17:07.329 "compare": false, 00:17:07.329 "compare_and_write": false, 00:17:07.329 "abort": false, 00:17:07.329 "nvme_admin": false, 00:17:07.329 "nvme_io": false 00:17:07.329 }, 00:17:07.329 "driver_specific": { 00:17:07.329 "lvol": { 00:17:07.329 "lvol_store_uuid": "bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c", 00:17:07.329 "base_bdev": "aio_bdev", 00:17:07.329 "thin_provision": false, 00:17:07.329 "snapshot": false, 00:17:07.329 "clone": false, 00:17:07.329 "esnap_clone": false 00:17:07.329 } 00:17:07.329 } 00:17:07.329 } 00:17:07.329 ] 00:17:07.329 22:43:51 -- common/autotest_common.sh@895 -- # return 0 00:17:07.329 22:43:51 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:17:07.329 22:43:51 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:07.329 22:43:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:07.329 22:43:52 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:17:07.329 22:43:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:07.602 22:43:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:07.602 22:43:52 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 681287e8-e867-465e-b926-2efd3537c7f6 00:17:07.602 22:43:52 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfd2b2f7-b791-4b44-91a2-2a66b4fdfa7c 00:17:07.865 22:43:52 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.126 00:17:08.126 real 0m15.150s 00:17:08.126 user 0m14.913s 00:17:08.126 sys 0m1.182s 00:17:08.126 22:43:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.126 22:43:52 -- common/autotest_common.sh@10 -- # set +x 00:17:08.126 ************************************ 00:17:08.126 END TEST lvs_grow_clean 00:17:08.126 ************************************ 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:08.126 22:43:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:08.126 22:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:08.126 22:43:52 -- common/autotest_common.sh@10 -- # set +x 00:17:08.126 ************************************ 00:17:08.126 START TEST lvs_grow_dirty 00:17:08.126 ************************************ 00:17:08.126 22:43:52 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:08.126 
22:43:52 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.126 22:43:52 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:08.387 22:43:53 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:08.387 22:43:53 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:08.387 22:43:53 -- target/nvmf_lvs_grow.sh@28 -- # lvs=5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:08.387 22:43:53 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:08.387 22:43:53 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:08.647 22:43:53 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:08.647 22:43:53 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:08.647 22:43:53 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 lvol 150 00:17:08.908 22:43:53 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7f09cf50-934f-43ac-8509-557ac0a7b5b4 00:17:08.908 22:43:53 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.908 22:43:53 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:08.908 [2024-04-15 22:43:53.598048] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:08.908 [2024-04-15 22:43:53.598099] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:08.908 true 00:17:08.908 22:43:53 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:08.908 22:43:53 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:09.169 22:43:53 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:09.169 22:43:53 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:09.169 22:43:53 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
7f09cf50-934f-43ac-8509-557ac0a7b5b4 00:17:09.429 22:43:54 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:09.429 22:43:54 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:09.691 22:43:54 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1082979 00:17:09.691 22:43:54 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.691 22:43:54 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:09.691 22:43:54 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1082979 /var/tmp/bdevperf.sock 00:17:09.691 22:43:54 -- common/autotest_common.sh@819 -- # '[' -z 1082979 ']' 00:17:09.691 22:43:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.691 22:43:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.691 22:43:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.691 22:43:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.691 22:43:54 -- common/autotest_common.sh@10 -- # set +x 00:17:09.691 [2024-04-15 22:43:54.384467] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:09.691 [2024-04-15 22:43:54.384519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082979 ] 00:17:09.691 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.691 [2024-04-15 22:43:54.449690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.952 [2024-04-15 22:43:54.511894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.522 22:43:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.523 22:43:55 -- common/autotest_common.sh@852 -- # return 0 00:17:10.523 22:43:55 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:10.783 Nvme0n1 00:17:10.783 22:43:55 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:11.044 [ 00:17:11.044 { 00:17:11.044 "name": "Nvme0n1", 00:17:11.044 "aliases": [ 00:17:11.044 "7f09cf50-934f-43ac-8509-557ac0a7b5b4" 00:17:11.044 ], 00:17:11.044 "product_name": "NVMe disk", 00:17:11.044 "block_size": 4096, 00:17:11.044 "num_blocks": 38912, 00:17:11.044 "uuid": "7f09cf50-934f-43ac-8509-557ac0a7b5b4", 00:17:11.044 "assigned_rate_limits": { 00:17:11.044 "rw_ios_per_sec": 0, 00:17:11.044 "rw_mbytes_per_sec": 0, 00:17:11.044 "r_mbytes_per_sec": 0, 00:17:11.044 "w_mbytes_per_sec": 0 00:17:11.044 }, 00:17:11.044 "claimed": false, 00:17:11.044 "zoned": false, 00:17:11.044 "supported_io_types": { 00:17:11.044 "read": true, 00:17:11.044 "write": true, 
00:17:11.044 "unmap": true, 00:17:11.044 "write_zeroes": true, 00:17:11.044 "flush": true, 00:17:11.044 "reset": true, 00:17:11.044 "compare": true, 00:17:11.044 "compare_and_write": true, 00:17:11.044 "abort": true, 00:17:11.044 "nvme_admin": true, 00:17:11.044 "nvme_io": true 00:17:11.044 }, 00:17:11.044 "driver_specific": { 00:17:11.044 "nvme": [ 00:17:11.044 { 00:17:11.044 "trid": { 00:17:11.044 "trtype": "TCP", 00:17:11.044 "adrfam": "IPv4", 00:17:11.044 "traddr": "10.0.0.2", 00:17:11.044 "trsvcid": "4420", 00:17:11.044 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:11.044 }, 00:17:11.044 "ctrlr_data": { 00:17:11.044 "cntlid": 1, 00:17:11.044 "vendor_id": "0x8086", 00:17:11.044 "model_number": "SPDK bdev Controller", 00:17:11.044 "serial_number": "SPDK0", 00:17:11.044 "firmware_revision": "24.01.1", 00:17:11.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:11.044 "oacs": { 00:17:11.044 "security": 0, 00:17:11.044 "format": 0, 00:17:11.044 "firmware": 0, 00:17:11.044 "ns_manage": 0 00:17:11.044 }, 00:17:11.044 "multi_ctrlr": true, 00:17:11.044 "ana_reporting": false 00:17:11.044 }, 00:17:11.044 "vs": { 00:17:11.044 "nvme_version": "1.3" 00:17:11.044 }, 00:17:11.044 "ns_data": { 00:17:11.044 "id": 1, 00:17:11.044 "can_share": true 00:17:11.044 } 00:17:11.044 } 00:17:11.044 ], 00:17:11.044 "mp_policy": "active_passive" 00:17:11.044 } 00:17:11.044 } 00:17:11.044 ] 00:17:11.044 22:43:55 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1083181 00:17:11.044 22:43:55 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:11.044 22:43:55 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:11.044 Running I/O for 10 seconds... 00:17:12.430 Latency(us) 00:17:12.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.430 Nvme0n1 : 1.00 18563.00 72.51 0.00 0.00 0.00 0.00 0.00 00:17:12.430 =================================================================================================================== 00:17:12.430 Total : 18563.00 72.51 0.00 0.00 0.00 0.00 0.00 00:17:12.430 00:17:13.003 22:43:57 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:13.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.003 Nvme0n1 : 2.00 18689.50 73.01 0.00 0.00 0.00 0.00 0.00 00:17:13.003 =================================================================================================================== 00:17:13.003 Total : 18689.50 73.01 0.00 0.00 0.00 0.00 0.00 00:17:13.003 00:17:13.264 true 00:17:13.264 22:43:57 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:13.264 22:43:57 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:13.264 22:43:58 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:13.264 22:43:58 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:13.264 22:43:58 -- target/nvmf_lvs_grow.sh@65 -- # wait 1083181 00:17:14.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.207 Nvme0n1 : 3.00 18755.33 73.26 0.00 0.00 0.00 0.00 0.00 00:17:14.207 
=================================================================================================================== 00:17:14.207 Total : 18755.33 73.26 0.00 0.00 0.00 0.00 0.00 00:17:14.207 00:17:15.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.152 Nvme0n1 : 4.00 18802.50 73.45 0.00 0.00 0.00 0.00 0.00 00:17:15.152 =================================================================================================================== 00:17:15.152 Total : 18802.50 73.45 0.00 0.00 0.00 0.00 0.00 00:17:15.152 00:17:16.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.092 Nvme0n1 : 5.00 18830.80 73.56 0.00 0.00 0.00 0.00 0.00 00:17:16.092 =================================================================================================================== 00:17:16.092 Total : 18830.80 73.56 0.00 0.00 0.00 0.00 0.00 00:17:16.092 00:17:17.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.035 Nvme0n1 : 6.00 18849.67 73.63 0.00 0.00 0.00 0.00 0.00 00:17:17.035 =================================================================================================================== 00:17:17.035 Total : 18849.67 73.63 0.00 0.00 0.00 0.00 0.00 00:17:17.035 00:17:18.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.419 Nvme0n1 : 7.00 18872.29 73.72 0.00 0.00 0.00 0.00 0.00 00:17:18.419 =================================================================================================================== 00:17:18.419 Total : 18872.29 73.72 0.00 0.00 0.00 0.00 0.00 00:17:18.419 00:17:19.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.362 Nvme0n1 : 8.00 18888.38 73.78 0.00 0.00 0.00 0.00 0.00 00:17:19.362 =================================================================================================================== 00:17:19.362 Total : 18888.38 73.78 0.00 0.00 0.00 0.00 0.00 00:17:19.362 00:17:20.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.306 Nvme0n1 : 9.00 18902.44 73.84 0.00 0.00 0.00 0.00 0.00 00:17:20.306 =================================================================================================================== 00:17:20.307 Total : 18902.44 73.84 0.00 0.00 0.00 0.00 0.00 00:17:20.307 00:17:21.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.250 Nvme0n1 : 10.00 18912.30 73.88 0.00 0.00 0.00 0.00 0.00 00:17:21.250 =================================================================================================================== 00:17:21.250 Total : 18912.30 73.88 0.00 0.00 0.00 0.00 0.00 00:17:21.250 00:17:21.250 00:17:21.250 Latency(us) 00:17:21.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.250 Nvme0n1 : 10.01 18909.31 73.86 0.00 0.00 6764.52 4478.29 16165.55 00:17:21.250 =================================================================================================================== 00:17:21.250 Total : 18909.31 73.86 0.00 0.00 6764.52 4478.29 16165.55 00:17:21.250 0 00:17:21.250 22:44:05 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1082979 00:17:21.250 22:44:05 -- common/autotest_common.sh@926 -- # '[' -z 1082979 ']' 00:17:21.250 22:44:05 -- common/autotest_common.sh@930 -- # kill -0 1082979 00:17:21.250 22:44:05 -- common/autotest_common.sh@931 -- # uname 00:17:21.250 22:44:05 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.250 22:44:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1082979 00:17:21.250 22:44:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:21.250 22:44:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:21.250 22:44:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1082979' 00:17:21.250 killing process with pid 1082979 00:17:21.250 22:44:05 -- common/autotest_common.sh@945 -- # kill 1082979 00:17:21.250 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.250 00:17:21.250 Latency(us) 00:17:21.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.250 =================================================================================================================== 00:17:21.250 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.250 22:44:05 -- common/autotest_common.sh@950 -- # wait 1082979 00:17:21.250 22:44:06 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:21.510 22:44:06 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:21.510 22:44:06 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:21.771 22:44:06 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:21.771 22:44:06 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:21.771 22:44:06 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1079333 00:17:21.771 22:44:06 -- target/nvmf_lvs_grow.sh@74 -- # wait 1079333 00:17:21.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1079333 Killed "${NVMF_APP[@]}" "$@" 00:17:21.771 22:44:06 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:21.771 22:44:06 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:21.771 22:44:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:21.771 22:44:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:21.771 22:44:06 -- common/autotest_common.sh@10 -- # set +x 00:17:21.771 22:44:06 -- nvmf/common.sh@469 -- # nvmfpid=1085357 00:17:21.771 22:44:06 -- nvmf/common.sh@470 -- # waitforlisten 1085357 00:17:21.771 22:44:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:21.771 22:44:06 -- common/autotest_common.sh@819 -- # '[' -z 1085357 ']' 00:17:21.771 22:44:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.771 22:44:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.771 22:44:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.771 22:44:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.771 22:44:06 -- common/autotest_common.sh@10 -- # set +x 00:17:21.771 [2024-04-15 22:44:06.477285] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
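The dirty-lvstore recovery exercised above condenses to a short manual sequence; a minimal sketch, assuming a target already running in the cvl_0_0_ns_spdk namespace and the same file-backed AIO bdev the test uses ($nvmfpid and $lvs_uuid are illustrative placeholders, paths are shown relative to the spdk checkout):

# Kill the target hard so the lvstore metadata is left dirty rather than cleanly closed.
kill -9 "$nvmfpid"
# Restart the target on a single core inside the server-side namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# Re-register the same AIO file; blobstore recovery replays the dirty metadata
# and the lvol bdev reappears under its lvs/lvol alias.
./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
# Confirm the recovered lvstore still reports the grown cluster counts.
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'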
00:17:21.771 [2024-04-15 22:44:06.477343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.771 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.771 [2024-04-15 22:44:06.551514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.031 [2024-04-15 22:44:06.613882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.032 [2024-04-15 22:44:06.614003] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.032 [2024-04-15 22:44:06.614012] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.032 [2024-04-15 22:44:06.614019] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.032 [2024-04-15 22:44:06.614047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.603 22:44:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.603 22:44:07 -- common/autotest_common.sh@852 -- # return 0 00:17:22.603 22:44:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.603 22:44:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:22.603 22:44:07 -- common/autotest_common.sh@10 -- # set +x 00:17:22.603 22:44:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.603 22:44:07 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:22.862 [2024-04-15 22:44:07.419310] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:22.862 [2024-04-15 22:44:07.419404] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:22.862 [2024-04-15 22:44:07.419434] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:22.862 22:44:07 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:22.862 22:44:07 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 7f09cf50-934f-43ac-8509-557ac0a7b5b4 00:17:22.862 22:44:07 -- common/autotest_common.sh@887 -- # local bdev_name=7f09cf50-934f-43ac-8509-557ac0a7b5b4 00:17:22.862 22:44:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:22.862 22:44:07 -- common/autotest_common.sh@889 -- # local i 00:17:22.862 22:44:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:22.862 22:44:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:22.862 22:44:07 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:22.862 22:44:07 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7f09cf50-934f-43ac-8509-557ac0a7b5b4 -t 2000 00:17:23.123 [ 00:17:23.123 { 00:17:23.123 "name": "7f09cf50-934f-43ac-8509-557ac0a7b5b4", 00:17:23.123 "aliases": [ 00:17:23.123 "lvs/lvol" 00:17:23.123 ], 00:17:23.123 "product_name": "Logical Volume", 00:17:23.123 "block_size": 4096, 00:17:23.123 "num_blocks": 38912, 00:17:23.123 "uuid": "7f09cf50-934f-43ac-8509-557ac0a7b5b4", 00:17:23.123 "assigned_rate_limits": { 00:17:23.123 "rw_ios_per_sec": 0, 00:17:23.123 "rw_mbytes_per_sec": 0, 00:17:23.123 "r_mbytes_per_sec": 0, 00:17:23.123 
"w_mbytes_per_sec": 0 00:17:23.123 }, 00:17:23.123 "claimed": false, 00:17:23.123 "zoned": false, 00:17:23.123 "supported_io_types": { 00:17:23.123 "read": true, 00:17:23.123 "write": true, 00:17:23.123 "unmap": true, 00:17:23.123 "write_zeroes": true, 00:17:23.123 "flush": false, 00:17:23.123 "reset": true, 00:17:23.123 "compare": false, 00:17:23.123 "compare_and_write": false, 00:17:23.123 "abort": false, 00:17:23.123 "nvme_admin": false, 00:17:23.123 "nvme_io": false 00:17:23.123 }, 00:17:23.123 "driver_specific": { 00:17:23.123 "lvol": { 00:17:23.123 "lvol_store_uuid": "5fc2d44a-056c-4e7c-9321-dcb867e26c96", 00:17:23.123 "base_bdev": "aio_bdev", 00:17:23.123 "thin_provision": false, 00:17:23.123 "snapshot": false, 00:17:23.123 "clone": false, 00:17:23.123 "esnap_clone": false 00:17:23.123 } 00:17:23.123 } 00:17:23.123 } 00:17:23.123 ] 00:17:23.123 22:44:07 -- common/autotest_common.sh@895 -- # return 0 00:17:23.123 22:44:07 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:23.123 22:44:07 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:23.123 22:44:07 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:23.123 22:44:07 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:23.123 22:44:07 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:23.383 22:44:08 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:23.383 22:44:08 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:23.383 [2024-04-15 22:44:08.191274] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:23.645 22:44:08 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:23.645 22:44:08 -- common/autotest_common.sh@640 -- # local es=0 00:17:23.645 22:44:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:23.645 22:44:08 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.645 22:44:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:23.645 22:44:08 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.645 22:44:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:23.645 22:44:08 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.645 22:44:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:23.645 22:44:08 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.645 22:44:08 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:23.645 22:44:08 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:23.645 request: 00:17:23.645 { 00:17:23.645 
"uuid": "5fc2d44a-056c-4e7c-9321-dcb867e26c96", 00:17:23.645 "method": "bdev_lvol_get_lvstores", 00:17:23.645 "req_id": 1 00:17:23.645 } 00:17:23.645 Got JSON-RPC error response 00:17:23.645 response: 00:17:23.645 { 00:17:23.645 "code": -19, 00:17:23.645 "message": "No such device" 00:17:23.645 } 00:17:23.645 22:44:08 -- common/autotest_common.sh@643 -- # es=1 00:17:23.645 22:44:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:23.645 22:44:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:23.645 22:44:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:23.645 22:44:08 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:23.936 aio_bdev 00:17:23.936 22:44:08 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7f09cf50-934f-43ac-8509-557ac0a7b5b4 00:17:23.936 22:44:08 -- common/autotest_common.sh@887 -- # local bdev_name=7f09cf50-934f-43ac-8509-557ac0a7b5b4 00:17:23.936 22:44:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:23.936 22:44:08 -- common/autotest_common.sh@889 -- # local i 00:17:23.936 22:44:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:23.936 22:44:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:23.936 22:44:08 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:23.936 22:44:08 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7f09cf50-934f-43ac-8509-557ac0a7b5b4 -t 2000 00:17:24.197 [ 00:17:24.197 { 00:17:24.197 "name": "7f09cf50-934f-43ac-8509-557ac0a7b5b4", 00:17:24.197 "aliases": [ 00:17:24.197 "lvs/lvol" 00:17:24.197 ], 00:17:24.197 "product_name": "Logical Volume", 00:17:24.197 "block_size": 4096, 00:17:24.197 "num_blocks": 38912, 00:17:24.197 "uuid": "7f09cf50-934f-43ac-8509-557ac0a7b5b4", 00:17:24.197 "assigned_rate_limits": { 00:17:24.197 "rw_ios_per_sec": 0, 00:17:24.197 "rw_mbytes_per_sec": 0, 00:17:24.197 "r_mbytes_per_sec": 0, 00:17:24.197 "w_mbytes_per_sec": 0 00:17:24.197 }, 00:17:24.197 "claimed": false, 00:17:24.197 "zoned": false, 00:17:24.197 "supported_io_types": { 00:17:24.197 "read": true, 00:17:24.197 "write": true, 00:17:24.197 "unmap": true, 00:17:24.197 "write_zeroes": true, 00:17:24.197 "flush": false, 00:17:24.197 "reset": true, 00:17:24.197 "compare": false, 00:17:24.197 "compare_and_write": false, 00:17:24.197 "abort": false, 00:17:24.197 "nvme_admin": false, 00:17:24.197 "nvme_io": false 00:17:24.197 }, 00:17:24.197 "driver_specific": { 00:17:24.197 "lvol": { 00:17:24.197 "lvol_store_uuid": "5fc2d44a-056c-4e7c-9321-dcb867e26c96", 00:17:24.197 "base_bdev": "aio_bdev", 00:17:24.197 "thin_provision": false, 00:17:24.197 "snapshot": false, 00:17:24.197 "clone": false, 00:17:24.197 "esnap_clone": false 00:17:24.197 } 00:17:24.197 } 00:17:24.197 } 00:17:24.197 ] 00:17:24.197 22:44:08 -- common/autotest_common.sh@895 -- # return 0 00:17:24.197 22:44:08 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:24.197 22:44:08 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:24.197 22:44:08 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:24.197 22:44:08 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:24.197 
22:44:08 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:24.456 22:44:09 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:24.456 22:44:09 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7f09cf50-934f-43ac-8509-557ac0a7b5b4 00:17:24.717 22:44:09 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5fc2d44a-056c-4e7c-9321-dcb867e26c96 00:17:24.717 22:44:09 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:24.978 22:44:09 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:24.978 00:17:24.978 real 0m16.844s 00:17:24.978 user 0m43.906s 00:17:24.978 sys 0m2.875s 00:17:24.978 22:44:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.978 22:44:09 -- common/autotest_common.sh@10 -- # set +x 00:17:24.978 ************************************ 00:17:24.978 END TEST lvs_grow_dirty 00:17:24.978 ************************************ 00:17:24.978 22:44:09 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:24.978 22:44:09 -- common/autotest_common.sh@796 -- # type=--id 00:17:24.978 22:44:09 -- common/autotest_common.sh@797 -- # id=0 00:17:24.978 22:44:09 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:24.978 22:44:09 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:24.978 22:44:09 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:24.978 22:44:09 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:24.978 22:44:09 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:24.978 22:44:09 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:24.978 nvmf_trace.0 00:17:24.978 22:44:09 -- common/autotest_common.sh@811 -- # return 0 00:17:24.978 22:44:09 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:24.978 22:44:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:24.978 22:44:09 -- nvmf/common.sh@116 -- # sync 00:17:24.978 22:44:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:24.978 22:44:09 -- nvmf/common.sh@119 -- # set +e 00:17:24.978 22:44:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:24.978 22:44:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:24.978 rmmod nvme_tcp 00:17:24.978 rmmod nvme_fabrics 00:17:24.978 rmmod nvme_keyring 00:17:25.238 22:44:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:25.238 22:44:09 -- nvmf/common.sh@123 -- # set -e 00:17:25.238 22:44:09 -- nvmf/common.sh@124 -- # return 0 00:17:25.238 22:44:09 -- nvmf/common.sh@477 -- # '[' -n 1085357 ']' 00:17:25.238 22:44:09 -- nvmf/common.sh@478 -- # killprocess 1085357 00:17:25.238 22:44:09 -- common/autotest_common.sh@926 -- # '[' -z 1085357 ']' 00:17:25.238 22:44:09 -- common/autotest_common.sh@930 -- # kill -0 1085357 00:17:25.238 22:44:09 -- common/autotest_common.sh@931 -- # uname 00:17:25.238 22:44:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:25.238 22:44:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1085357 00:17:25.238 22:44:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:25.238 
22:44:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:25.238 22:44:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1085357' 00:17:25.238 killing process with pid 1085357 00:17:25.238 22:44:09 -- common/autotest_common.sh@945 -- # kill 1085357 00:17:25.238 22:44:09 -- common/autotest_common.sh@950 -- # wait 1085357 00:17:25.238 22:44:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:25.238 22:44:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:25.238 22:44:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:25.238 22:44:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.238 22:44:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:25.238 22:44:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.238 22:44:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.238 22:44:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.784 22:44:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:27.784 00:17:27.784 real 0m43.120s 00:17:27.784 user 1m4.670s 00:17:27.784 sys 0m10.052s 00:17:27.784 22:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.784 22:44:12 -- common/autotest_common.sh@10 -- # set +x 00:17:27.784 ************************************ 00:17:27.784 END TEST nvmf_lvs_grow 00:17:27.784 ************************************ 00:17:27.784 22:44:12 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:27.784 22:44:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:27.784 22:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:27.784 22:44:12 -- common/autotest_common.sh@10 -- # set +x 00:17:27.784 ************************************ 00:17:27.784 START TEST nvmf_bdev_io_wait 00:17:27.784 ************************************ 00:17:27.784 22:44:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:27.784 * Looking for test storage... 
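The teardown that closes the lvs_grow run (trace capture, kernel module unload, address flush) is a short fixed sequence; a sketch assuming the same cvl_0_1 initiator interface, with $nvmfpid and $output_dir as illustrative placeholders:

# Save the tracepoint shared-memory file for offline analysis before the target exits.
tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
# Stop the target, then unload the kernel NVMe/TCP initiator modules.
kill "$nvmfpid" && wait "$nvmfpid"
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Drop the test addresses from the initiator-side interface.
ip -4 addr flush cvl_0_1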
00:17:27.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.784 22:44:12 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.784 22:44:12 -- nvmf/common.sh@7 -- # uname -s 00:17:27.784 22:44:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.784 22:44:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.784 22:44:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.784 22:44:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.784 22:44:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.784 22:44:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.784 22:44:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.784 22:44:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.784 22:44:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.784 22:44:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.784 22:44:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.784 22:44:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.784 22:44:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.784 22:44:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.784 22:44:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.784 22:44:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.784 22:44:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.784 22:44:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.784 22:44:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.784 22:44:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.784 22:44:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.784 22:44:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.784 22:44:12 -- paths/export.sh@5 -- # export PATH 00:17:27.784 22:44:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.784 22:44:12 -- nvmf/common.sh@46 -- # : 0 00:17:27.784 22:44:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:27.784 22:44:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:27.784 22:44:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:27.784 22:44:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.784 22:44:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.784 22:44:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:27.784 22:44:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:27.784 22:44:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:27.784 22:44:12 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.784 22:44:12 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.784 22:44:12 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:27.784 22:44:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:27.784 22:44:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.784 22:44:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:27.784 22:44:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:27.785 22:44:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:27.785 22:44:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.785 22:44:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.785 22:44:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.785 22:44:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:27.785 22:44:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:27.785 22:44:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:27.785 22:44:12 -- common/autotest_common.sh@10 -- # set +x 00:17:35.923 22:44:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:35.923 22:44:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:35.923 22:44:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:35.923 22:44:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:35.923 22:44:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:35.923 22:44:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:35.923 22:44:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:35.923 22:44:19 -- nvmf/common.sh@294 -- # net_devs=() 00:17:35.923 22:44:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:35.923 22:44:19 -- 
nvmf/common.sh@295 -- # e810=() 00:17:35.923 22:44:19 -- nvmf/common.sh@295 -- # local -ga e810 00:17:35.923 22:44:19 -- nvmf/common.sh@296 -- # x722=() 00:17:35.923 22:44:19 -- nvmf/common.sh@296 -- # local -ga x722 00:17:35.923 22:44:19 -- nvmf/common.sh@297 -- # mlx=() 00:17:35.923 22:44:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:35.923 22:44:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.923 22:44:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:35.923 22:44:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:35.923 22:44:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:35.923 22:44:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:35.923 22:44:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:35.923 22:44:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:35.923 22:44:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:35.923 22:44:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:35.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:35.923 22:44:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:35.923 22:44:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:35.923 22:44:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.923 22:44:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.923 22:44:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:35.924 22:44:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:35.924 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:35.924 22:44:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:35.924 22:44:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:35.924 22:44:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.924 22:44:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:35.924 22:44:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.924 22:44:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:17:35.924 Found net devices under 0000:31:00.0: cvl_0_0 00:17:35.924 22:44:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.924 22:44:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:35.924 22:44:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.924 22:44:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:35.924 22:44:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.924 22:44:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:35.924 Found net devices under 0000:31:00.1: cvl_0_1 00:17:35.924 22:44:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.924 22:44:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:35.924 22:44:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:35.924 22:44:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:35.924 22:44:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:35.924 22:44:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.924 22:44:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.924 22:44:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.924 22:44:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:35.924 22:44:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.924 22:44:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.924 22:44:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:35.924 22:44:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.924 22:44:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.924 22:44:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:35.924 22:44:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:35.924 22:44:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.924 22:44:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.924 22:44:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.924 22:44:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.924 22:44:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:35.924 22:44:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.924 22:44:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.924 22:44:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.924 22:44:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:35.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:17:35.924 00:17:35.924 --- 10.0.0.2 ping statistics --- 00:17:35.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.924 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:17:35.924 22:44:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:17:35.924 00:17:35.924 --- 10.0.0.1 ping statistics --- 00:17:35.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.924 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:17:35.924 22:44:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.924 22:44:20 -- nvmf/common.sh@410 -- # return 0 00:17:35.924 22:44:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:35.924 22:44:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.924 22:44:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:35.924 22:44:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:35.924 22:44:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.924 22:44:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:35.924 22:44:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:35.924 22:44:20 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:35.924 22:44:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:35.924 22:44:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:35.924 22:44:20 -- common/autotest_common.sh@10 -- # set +x 00:17:35.924 22:44:20 -- nvmf/common.sh@469 -- # nvmfpid=1090726 00:17:35.924 22:44:20 -- nvmf/common.sh@470 -- # waitforlisten 1090726 00:17:35.924 22:44:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:35.924 22:44:20 -- common/autotest_common.sh@819 -- # '[' -z 1090726 ']' 00:17:35.924 22:44:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.924 22:44:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:35.924 22:44:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.924 22:44:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:35.924 22:44:20 -- common/autotest_common.sh@10 -- # set +x 00:17:35.924 [2024-04-15 22:44:20.302396] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:35.924 [2024-04-15 22:44:20.302464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.924 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.924 [2024-04-15 22:44:20.383659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.924 [2024-04-15 22:44:20.459131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:35.924 [2024-04-15 22:44:20.459273] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.924 [2024-04-15 22:44:20.459283] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.924 [2024-04-15 22:44:20.459292] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
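The nvmf_tcp_init sequence traced above is the whole test bed: one of the two E810 ports is moved into a private network namespace to act as the target side, the peer port stays in the root namespace as the initiator, reachability is checked in both directions, and the kernel nvme-tcp driver is loaded for the host side. Condensed into a standalone sketch (only commands that appear in the trace; the cvl_0_0/cvl_0_1 names and the 10.0.0.x addresses are simply what this run resolved to):

    ip netns add cvl_0_0_ns_spdk                      # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back
    modprobe nvme-tcp                                 # initiator-side driver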
00:17:35.924 [2024-04-15 22:44:20.459414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.924 [2024-04-15 22:44:20.459528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.924 [2024-04-15 22:44:20.459676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.924 [2024-04-15 22:44:20.459676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.501 22:44:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:36.501 22:44:21 -- common/autotest_common.sh@852 -- # return 0 00:17:36.501 22:44:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:36.501 22:44:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:36.501 22:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 22:44:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:36.501 22:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.501 22:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 22:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:36.501 22:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.501 22:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 22:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.501 22:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.501 22:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 [2024-04-15 22:44:21.187912] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.501 22:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.501 22:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.501 22:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 Malloc0 00:17:36.501 22:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.501 22:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.501 22:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 22:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.501 22:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.501 22:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 22:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.501 22:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.501 22:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 [2024-04-15 22:44:21.254852] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.501 22:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1091013 00:17:36.501 
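With the target running (started with --wait-for-rpc, so nothing is initialized yet), the rpc_cmd calls above build the test subsystem in five steps: shrink the bdev_io pool (presumably to force submissions onto the io_wait path this test exercises), finish framework init, create the TCP transport, back the namespace with a 64 MiB / 512 B-block malloc bdev, and expose it on 10.0.0.2:4420. rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py and the default /var/tmp/spdk.sock, so the equivalent standalone sequence is roughly (paths relative to the spdk checkout):

    RPC=spdk/scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1       # deliberately tiny bdev_io pool/cache
    $RPC framework_start_init             # needed because nvmf_tgt was launched with --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420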
22:44:21 -- target/bdev_io_wait.sh@30 -- # READ_PID=1091015 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:36.501 22:44:21 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:36.501 22:44:21 -- nvmf/common.sh@520 -- # config=() 00:17:36.501 22:44:21 -- nvmf/common.sh@520 -- # local subsystem config 00:17:36.501 22:44:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:36.501 22:44:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:36.501 { 00:17:36.501 "params": { 00:17:36.501 "name": "Nvme$subsystem", 00:17:36.501 "trtype": "$TEST_TRANSPORT", 00:17:36.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.501 "adrfam": "ipv4", 00:17:36.502 "trsvcid": "$NVMF_PORT", 00:17:36.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.502 "hdgst": ${hdgst:-false}, 00:17:36.502 "ddgst": ${ddgst:-false} 00:17:36.502 }, 00:17:36.502 "method": "bdev_nvme_attach_controller" 00:17:36.502 } 00:17:36.502 EOF 00:17:36.502 )") 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1091017 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:36.502 22:44:21 -- nvmf/common.sh@520 -- # config=() 00:17:36.502 22:44:21 -- nvmf/common.sh@520 -- # local subsystem config 00:17:36.502 22:44:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1091020 00:17:36.502 22:44:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:36.502 { 00:17:36.502 "params": { 00:17:36.502 "name": "Nvme$subsystem", 00:17:36.502 "trtype": "$TEST_TRANSPORT", 00:17:36.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.502 "adrfam": "ipv4", 00:17:36.502 "trsvcid": "$NVMF_PORT", 00:17:36.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.502 "hdgst": ${hdgst:-false}, 00:17:36.502 "ddgst": ${ddgst:-false} 00:17:36.502 }, 00:17:36.502 "method": "bdev_nvme_attach_controller" 00:17:36.502 } 00:17:36.502 EOF 00:17:36.502 )") 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@35 -- # sync 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:36.502 22:44:21 -- nvmf/common.sh@520 -- # config=() 00:17:36.502 22:44:21 -- nvmf/common.sh@542 -- # cat 00:17:36.502 22:44:21 -- nvmf/common.sh@520 -- # local subsystem config 00:17:36.502 22:44:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:36.502 22:44:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:36.502 { 00:17:36.502 "params": { 00:17:36.502 "name": "Nvme$subsystem", 00:17:36.502 "trtype": "$TEST_TRANSPORT", 00:17:36.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.502 "adrfam": "ipv4", 00:17:36.502 "trsvcid": "$NVMF_PORT", 00:17:36.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.502 "hdgst": ${hdgst:-false}, 00:17:36.502 "ddgst": ${ddgst:-false} 00:17:36.502 }, 
00:17:36.502 "method": "bdev_nvme_attach_controller" 00:17:36.502 } 00:17:36.502 EOF 00:17:36.502 )") 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:36.502 22:44:21 -- nvmf/common.sh@520 -- # config=() 00:17:36.502 22:44:21 -- nvmf/common.sh@520 -- # local subsystem config 00:17:36.502 22:44:21 -- nvmf/common.sh@542 -- # cat 00:17:36.502 22:44:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:36.502 22:44:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:36.502 { 00:17:36.502 "params": { 00:17:36.502 "name": "Nvme$subsystem", 00:17:36.502 "trtype": "$TEST_TRANSPORT", 00:17:36.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.502 "adrfam": "ipv4", 00:17:36.502 "trsvcid": "$NVMF_PORT", 00:17:36.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.502 "hdgst": ${hdgst:-false}, 00:17:36.502 "ddgst": ${ddgst:-false} 00:17:36.502 }, 00:17:36.502 "method": "bdev_nvme_attach_controller" 00:17:36.502 } 00:17:36.502 EOF 00:17:36.502 )") 00:17:36.502 22:44:21 -- nvmf/common.sh@542 -- # cat 00:17:36.502 22:44:21 -- target/bdev_io_wait.sh@37 -- # wait 1091013 00:17:36.502 22:44:21 -- nvmf/common.sh@542 -- # cat 00:17:36.502 22:44:21 -- nvmf/common.sh@544 -- # jq . 00:17:36.502 22:44:21 -- nvmf/common.sh@544 -- # jq . 00:17:36.502 22:44:21 -- nvmf/common.sh@544 -- # jq . 00:17:36.502 22:44:21 -- nvmf/common.sh@545 -- # IFS=, 00:17:36.502 22:44:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:36.502 "params": { 00:17:36.502 "name": "Nvme1", 00:17:36.502 "trtype": "tcp", 00:17:36.502 "traddr": "10.0.0.2", 00:17:36.502 "adrfam": "ipv4", 00:17:36.502 "trsvcid": "4420", 00:17:36.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.502 "hdgst": false, 00:17:36.502 "ddgst": false 00:17:36.502 }, 00:17:36.502 "method": "bdev_nvme_attach_controller" 00:17:36.502 }' 00:17:36.502 22:44:21 -- nvmf/common.sh@544 -- # jq . 
00:17:36.502 22:44:21 -- nvmf/common.sh@545 -- # IFS=, 00:17:36.502 22:44:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:36.502 "params": { 00:17:36.502 "name": "Nvme1", 00:17:36.502 "trtype": "tcp", 00:17:36.502 "traddr": "10.0.0.2", 00:17:36.502 "adrfam": "ipv4", 00:17:36.502 "trsvcid": "4420", 00:17:36.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.502 "hdgst": false, 00:17:36.502 "ddgst": false 00:17:36.502 }, 00:17:36.502 "method": "bdev_nvme_attach_controller" 00:17:36.502 }' 00:17:36.502 22:44:21 -- nvmf/common.sh@545 -- # IFS=, 00:17:36.502 22:44:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:36.502 "params": { 00:17:36.502 "name": "Nvme1", 00:17:36.502 "trtype": "tcp", 00:17:36.502 "traddr": "10.0.0.2", 00:17:36.502 "adrfam": "ipv4", 00:17:36.502 "trsvcid": "4420", 00:17:36.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.502 "hdgst": false, 00:17:36.502 "ddgst": false 00:17:36.502 }, 00:17:36.502 "method": "bdev_nvme_attach_controller" 00:17:36.502 }' 00:17:36.502 22:44:21 -- nvmf/common.sh@545 -- # IFS=, 00:17:36.502 22:44:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:36.502 "params": { 00:17:36.502 "name": "Nvme1", 00:17:36.502 "trtype": "tcp", 00:17:36.502 "traddr": "10.0.0.2", 00:17:36.502 "adrfam": "ipv4", 00:17:36.502 "trsvcid": "4420", 00:17:36.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.502 "hdgst": false, 00:17:36.502 "ddgst": false 00:17:36.502 }, 00:17:36.502 "method": "bdev_nvme_attach_controller" 00:17:36.502 }' 00:17:36.502 [2024-04-15 22:44:21.304172] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:36.502 [2024-04-15 22:44:21.304224] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:36.502 [2024-04-15 22:44:21.304625] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:36.502 [2024-04-15 22:44:21.304669] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:36.502 [2024-04-15 22:44:21.309638] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:36.502 [2024-04-15 22:44:21.309684] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:36.763 [2024-04-15 22:44:21.310078] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:17:36.763 [2024-04-15 22:44:21.310121] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:36.763 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.764 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.764 [2024-04-15 22:44:21.465233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.764 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.764 [2024-04-15 22:44:21.513946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:36.764 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.764 [2024-04-15 22:44:21.520884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.764 [2024-04-15 22:44:21.570213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:36.764 [2024-04-15 22:44:21.571125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.024 [2024-04-15 22:44:21.607234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.024 [2024-04-15 22:44:21.620852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:37.024 [2024-04-15 22:44:21.654166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:37.024 Running I/O for 1 seconds... 00:17:37.024 Running I/O for 1 seconds... 00:17:37.024 Running I/O for 1 seconds... 00:17:37.284 Running I/O for 1 seconds... 00:17:38.224 00:17:38.224 Latency(us) 00:17:38.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.224 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:38.224 Nvme1n1 : 1.01 15798.82 61.71 0.00 0.00 8077.52 5079.04 18350.08 00:17:38.224 =================================================================================================================== 00:17:38.224 Total : 15798.82 61.71 0.00 0.00 8077.52 5079.04 18350.08 00:17:38.224 00:17:38.224 Latency(us) 00:17:38.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.224 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:38.224 Nvme1n1 : 1.01 7111.62 27.78 0.00 0.00 17906.65 7099.73 30146.56 00:17:38.224 =================================================================================================================== 00:17:38.224 Total : 7111.62 27.78 0.00 0.00 17906.65 7099.73 30146.56 00:17:38.224 00:17:38.224 Latency(us) 00:17:38.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.224 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:38.224 Nvme1n1 : 1.00 192405.43 751.58 0.00 0.00 662.96 262.83 740.69 00:17:38.224 =================================================================================================================== 00:17:38.224 Total : 192405.43 751.58 0.00 0.00 662.96 262.83 740.69 00:17:38.224 00:17:38.224 Latency(us) 00:17:38.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.224 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:38.224 Nvme1n1 : 1.01 7251.43 28.33 0.00 0.00 17601.12 4915.20 43035.31 00:17:38.224 =================================================================================================================== 00:17:38.224 Total : 7251.43 28.33 0.00 0.00 17601.12 4915.20 43035.31 00:17:38.485 22:44:23 -- target/bdev_io_wait.sh@38 -- # wait 1091015 00:17:38.485 
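Each of the four bdevperf jobs above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) attaches to the same cnode1 subsystem over TCP; the JSON each one reads from --json /dev/fd/63 is the bdev_nvme_attach_controller params printed by gen_nvmf_target_json, wrapped in a bdev-subsystem config. A sketch of reproducing the write job by hand, assuming the wrapper follows the usual "subsystems"/"bdev"/"config" layout (the params values and bdevperf flags are copied from the trace; the /tmp path is illustrative only):

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256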
22:44:23 -- target/bdev_io_wait.sh@39 -- # wait 1091017 00:17:38.485 22:44:23 -- target/bdev_io_wait.sh@40 -- # wait 1091020 00:17:38.485 22:44:23 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.485 22:44:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:38.485 22:44:23 -- common/autotest_common.sh@10 -- # set +x 00:17:38.485 22:44:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:38.485 22:44:23 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:38.485 22:44:23 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:38.485 22:44:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:38.485 22:44:23 -- nvmf/common.sh@116 -- # sync 00:17:38.485 22:44:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:38.485 22:44:23 -- nvmf/common.sh@119 -- # set +e 00:17:38.485 22:44:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:38.485 22:44:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:38.485 rmmod nvme_tcp 00:17:38.485 rmmod nvme_fabrics 00:17:38.485 rmmod nvme_keyring 00:17:38.485 22:44:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:38.485 22:44:23 -- nvmf/common.sh@123 -- # set -e 00:17:38.485 22:44:23 -- nvmf/common.sh@124 -- # return 0 00:17:38.485 22:44:23 -- nvmf/common.sh@477 -- # '[' -n 1090726 ']' 00:17:38.485 22:44:23 -- nvmf/common.sh@478 -- # killprocess 1090726 00:17:38.485 22:44:23 -- common/autotest_common.sh@926 -- # '[' -z 1090726 ']' 00:17:38.485 22:44:23 -- common/autotest_common.sh@930 -- # kill -0 1090726 00:17:38.485 22:44:23 -- common/autotest_common.sh@931 -- # uname 00:17:38.485 22:44:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.485 22:44:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1090726 00:17:38.485 22:44:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:38.485 22:44:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:38.485 22:44:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1090726' 00:17:38.485 killing process with pid 1090726 00:17:38.485 22:44:23 -- common/autotest_common.sh@945 -- # kill 1090726 00:17:38.485 22:44:23 -- common/autotest_common.sh@950 -- # wait 1090726 00:17:38.745 22:44:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:38.745 22:44:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:38.745 22:44:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:38.745 22:44:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.745 22:44:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:38.745 22:44:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.745 22:44:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.745 22:44:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.657 22:44:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:40.657 00:17:40.657 real 0m13.243s 00:17:40.657 user 0m19.002s 00:17:40.657 sys 0m7.253s 00:17:40.657 22:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.657 22:44:25 -- common/autotest_common.sh@10 -- # set +x 00:17:40.657 ************************************ 00:17:40.657 END TEST nvmf_bdev_io_wait 00:17:40.657 ************************************ 00:17:40.657 22:44:25 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:40.657 22:44:25 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:17:40.657 22:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.657 22:44:25 -- common/autotest_common.sh@10 -- # set +x 00:17:40.657 ************************************ 00:17:40.657 START TEST nvmf_queue_depth 00:17:40.657 ************************************ 00:17:40.657 22:44:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:40.917 * Looking for test storage... 00:17:40.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.917 22:44:25 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.917 22:44:25 -- nvmf/common.sh@7 -- # uname -s 00:17:40.917 22:44:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.917 22:44:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.917 22:44:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.917 22:44:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.917 22:44:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.917 22:44:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.917 22:44:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.917 22:44:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.917 22:44:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.917 22:44:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.917 22:44:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:40.917 22:44:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:40.917 22:44:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.917 22:44:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.917 22:44:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.917 22:44:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.917 22:44:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.917 22:44:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.917 22:44:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.917 22:44:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.918 22:44:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.918 22:44:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.918 22:44:25 -- paths/export.sh@5 -- # export PATH 00:17:40.918 22:44:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.918 22:44:25 -- nvmf/common.sh@46 -- # : 0 00:17:40.918 22:44:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.918 22:44:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.918 22:44:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.918 22:44:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.918 22:44:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.918 22:44:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.918 22:44:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.918 22:44:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.918 22:44:25 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:40.918 22:44:25 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:40.918 22:44:25 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.918 22:44:25 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:40.918 22:44:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:40.918 22:44:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.918 22:44:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.918 22:44:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.918 22:44:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.918 22:44:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.918 22:44:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.918 22:44:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.918 22:44:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:40.918 22:44:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:40.918 22:44:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:40.918 22:44:25 -- common/autotest_common.sh@10 -- # set +x 00:17:49.055 22:44:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:49.055 22:44:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:49.055 22:44:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:49.055 22:44:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:49.055 22:44:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:49.055 22:44:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:49.055 22:44:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:49.055 22:44:33 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:49.055 22:44:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:49.055 22:44:33 -- nvmf/common.sh@295 -- # e810=() 00:17:49.055 22:44:33 -- nvmf/common.sh@295 -- # local -ga e810 00:17:49.055 22:44:33 -- nvmf/common.sh@296 -- # x722=() 00:17:49.055 22:44:33 -- nvmf/common.sh@296 -- # local -ga x722 00:17:49.055 22:44:33 -- nvmf/common.sh@297 -- # mlx=() 00:17:49.055 22:44:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:49.056 22:44:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.056 22:44:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:49.056 22:44:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:49.056 22:44:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:49.056 22:44:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:49.056 22:44:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:49.056 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:49.056 22:44:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:49.056 22:44:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:49.056 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:49.056 22:44:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:49.056 22:44:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:49.056 22:44:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.056 22:44:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:49.056 22:44:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
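The discovery loop here (identical to the one in the previous test) resolves each detected E810 function to its kernel netdev purely through sysfs; spelled out for one port of this run:

    pci=0000:31:00.0
    pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)    # expands to .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path -> cvl_0_0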
00:17:49.056 22:44:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:49.056 Found net devices under 0000:31:00.0: cvl_0_0 00:17:49.056 22:44:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.056 22:44:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:49.056 22:44:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.056 22:44:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:49.056 22:44:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.056 22:44:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:49.056 Found net devices under 0000:31:00.1: cvl_0_1 00:17:49.056 22:44:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.056 22:44:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:49.056 22:44:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:49.056 22:44:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:49.056 22:44:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.056 22:44:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.056 22:44:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.056 22:44:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:49.056 22:44:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.056 22:44:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.056 22:44:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:49.056 22:44:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.056 22:44:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.056 22:44:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:49.056 22:44:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:49.056 22:44:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.056 22:44:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.056 22:44:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.056 22:44:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.056 22:44:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:49.056 22:44:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.056 22:44:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.056 22:44:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.056 22:44:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:49.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:17:49.056 00:17:49.056 --- 10.0.0.2 ping statistics --- 00:17:49.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.056 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:17:49.056 22:44:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:17:49.056 00:17:49.056 --- 10.0.0.1 ping statistics --- 00:17:49.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.056 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:17:49.056 22:44:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.056 22:44:33 -- nvmf/common.sh@410 -- # return 0 00:17:49.056 22:44:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:49.056 22:44:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.056 22:44:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:49.056 22:44:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.056 22:44:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:49.056 22:44:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:49.056 22:44:33 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:49.056 22:44:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:49.056 22:44:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:49.056 22:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:49.056 22:44:33 -- nvmf/common.sh@469 -- # nvmfpid=1096073 00:17:49.056 22:44:33 -- nvmf/common.sh@470 -- # waitforlisten 1096073 00:17:49.056 22:44:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:49.056 22:44:33 -- common/autotest_common.sh@819 -- # '[' -z 1096073 ']' 00:17:49.056 22:44:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.056 22:44:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:49.056 22:44:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.056 22:44:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:49.056 22:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:49.056 [2024-04-15 22:44:33.506325] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:49.056 [2024-04-15 22:44:33.506376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.056 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.056 [2024-04-15 22:44:33.580138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.056 [2024-04-15 22:44:33.641757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:49.056 [2024-04-15 22:44:33.641877] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.056 [2024-04-15 22:44:33.641885] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.056 [2024-04-15 22:44:33.641892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
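The queue-depth test then reuses the same target bring-up (this time on core mask 0x2) but drives bdevperf differently, as the trace below shows: bdevperf is started idle with -z and its own RPC socket, the NVMe-oF controller is attached through that socket (surfacing as bdev NVMe0n1), and perform_tests launches the 10-second verify run at queue depth 1024. A minimal sketch using the values from the trace (paths relative to the spdk checkout; rpc_cmd -s is again assumed to map to scripts/rpc.py -s):

    spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # the harness waits for /var/tmp/bdevperf.sock to appear before issuing RPCs
    spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests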
00:17:49.056 [2024-04-15 22:44:33.641916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.627 22:44:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.627 22:44:34 -- common/autotest_common.sh@852 -- # return 0 00:17:49.627 22:44:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:49.627 22:44:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:49.627 22:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:49.627 22:44:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.627 22:44:34 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.627 22:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.627 22:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:49.627 [2024-04-15 22:44:34.296393] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.627 22:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.627 22:44:34 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.627 22:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.627 22:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:49.627 Malloc0 00:17:49.627 22:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.627 22:44:34 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.627 22:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.627 22:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:49.627 22:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.627 22:44:34 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.627 22:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.627 22:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:49.627 22:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.627 22:44:34 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.627 22:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.627 22:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:49.627 [2024-04-15 22:44:34.373826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.627 22:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.627 22:44:34 -- target/queue_depth.sh@30 -- # bdevperf_pid=1096240 00:17:49.627 22:44:34 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.627 22:44:34 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:49.627 22:44:34 -- target/queue_depth.sh@33 -- # waitforlisten 1096240 /var/tmp/bdevperf.sock 00:17:49.627 22:44:34 -- common/autotest_common.sh@819 -- # '[' -z 1096240 ']' 00:17:49.627 22:44:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.627 22:44:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:49.627 22:44:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:49.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.627 22:44:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:49.627 22:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:49.627 [2024-04-15 22:44:34.424173] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:49.627 [2024-04-15 22:44:34.424218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1096240 ] 00:17:49.887 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.887 [2024-04-15 22:44:34.489268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.887 [2024-04-15 22:44:34.551942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.457 22:44:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:50.457 22:44:35 -- common/autotest_common.sh@852 -- # return 0 00:17:50.457 22:44:35 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:50.457 22:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.457 22:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:50.716 NVMe0n1 00:17:50.716 22:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.716 22:44:35 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.716 Running I/O for 10 seconds... 00:18:02.954 00:18:02.954 Latency(us) 00:18:02.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.954 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:02.954 Verification LBA range: start 0x0 length 0x4000 00:18:02.954 NVMe0n1 : 10.06 14286.03 55.80 0.00 0.00 71417.92 14308.69 52428.80 00:18:02.954 =================================================================================================================== 00:18:02.954 Total : 14286.03 55.80 0.00 0.00 71417.92 14308.69 52428.80 00:18:02.954 0 00:18:02.954 22:44:45 -- target/queue_depth.sh@39 -- # killprocess 1096240 00:18:02.954 22:44:45 -- common/autotest_common.sh@926 -- # '[' -z 1096240 ']' 00:18:02.954 22:44:45 -- common/autotest_common.sh@930 -- # kill -0 1096240 00:18:02.954 22:44:45 -- common/autotest_common.sh@931 -- # uname 00:18:02.954 22:44:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.954 22:44:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1096240 00:18:02.954 22:44:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:02.954 22:44:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:02.954 22:44:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1096240' 00:18:02.954 killing process with pid 1096240 00:18:02.954 22:44:45 -- common/autotest_common.sh@945 -- # kill 1096240 00:18:02.954 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.954 00:18:02.954 Latency(us) 00:18:02.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.954 =================================================================================================================== 00:18:02.954 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.954 22:44:45 -- 
common/autotest_common.sh@950 -- # wait 1096240 00:18:02.954 22:44:45 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:02.954 22:44:45 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:02.954 22:44:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:02.954 22:44:45 -- nvmf/common.sh@116 -- # sync 00:18:02.954 22:44:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:02.954 22:44:45 -- nvmf/common.sh@119 -- # set +e 00:18:02.954 22:44:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:02.954 22:44:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:02.954 rmmod nvme_tcp 00:18:02.954 rmmod nvme_fabrics 00:18:02.954 rmmod nvme_keyring 00:18:02.954 22:44:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:02.954 22:44:45 -- nvmf/common.sh@123 -- # set -e 00:18:02.954 22:44:45 -- nvmf/common.sh@124 -- # return 0 00:18:02.954 22:44:45 -- nvmf/common.sh@477 -- # '[' -n 1096073 ']' 00:18:02.954 22:44:45 -- nvmf/common.sh@478 -- # killprocess 1096073 00:18:02.954 22:44:45 -- common/autotest_common.sh@926 -- # '[' -z 1096073 ']' 00:18:02.954 22:44:45 -- common/autotest_common.sh@930 -- # kill -0 1096073 00:18:02.954 22:44:45 -- common/autotest_common.sh@931 -- # uname 00:18:02.954 22:44:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.954 22:44:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1096073 00:18:02.954 22:44:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:02.954 22:44:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:02.954 22:44:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1096073' 00:18:02.954 killing process with pid 1096073 00:18:02.954 22:44:45 -- common/autotest_common.sh@945 -- # kill 1096073 00:18:02.954 22:44:45 -- common/autotest_common.sh@950 -- # wait 1096073 00:18:02.955 22:44:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:02.955 22:44:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:02.955 22:44:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:02.955 22:44:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.955 22:44:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:02.955 22:44:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.955 22:44:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.955 22:44:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.530 22:44:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:03.530 00:18:03.530 real 0m22.711s 00:18:03.530 user 0m25.885s 00:18:03.530 sys 0m6.923s 00:18:03.530 22:44:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.530 22:44:48 -- common/autotest_common.sh@10 -- # set +x 00:18:03.530 ************************************ 00:18:03.530 END TEST nvmf_queue_depth 00:18:03.530 ************************************ 00:18:03.530 22:44:48 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:03.530 22:44:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:03.530 22:44:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:03.530 22:44:48 -- common/autotest_common.sh@10 -- # set +x 00:18:03.530 ************************************ 00:18:03.530 START TEST nvmf_multipath 00:18:03.530 ************************************ 00:18:03.530 22:44:48 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:03.530 * Looking for test storage... 00:18:03.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.530 22:44:48 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.530 22:44:48 -- nvmf/common.sh@7 -- # uname -s 00:18:03.530 22:44:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.530 22:44:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.530 22:44:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.530 22:44:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.530 22:44:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.530 22:44:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.530 22:44:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.530 22:44:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.530 22:44:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.530 22:44:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.530 22:44:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.530 22:44:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.530 22:44:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.530 22:44:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.530 22:44:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.530 22:44:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.530 22:44:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.530 22:44:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.530 22:44:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.530 22:44:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.530 22:44:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.530 22:44:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.530 22:44:48 -- paths/export.sh@5 -- # export PATH 00:18:03.530 22:44:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.530 22:44:48 -- nvmf/common.sh@46 -- # : 0 00:18:03.530 22:44:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:03.530 22:44:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:03.530 22:44:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:03.530 22:44:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.530 22:44:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.530 22:44:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:03.530 22:44:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:03.530 22:44:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:03.530 22:44:48 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.530 22:44:48 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.530 22:44:48 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:03.530 22:44:48 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.530 22:44:48 -- target/multipath.sh@43 -- # nvmftestinit 00:18:03.530 22:44:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:03.530 22:44:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.530 22:44:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:03.530 22:44:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:03.530 22:44:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:03.530 22:44:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.530 22:44:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.530 22:44:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.530 22:44:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:03.530 22:44:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:03.530 22:44:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:03.530 22:44:48 -- common/autotest_common.sh@10 -- # set +x 00:18:11.745 22:44:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:11.745 22:44:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:11.745 22:44:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:11.745 22:44:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:11.745 22:44:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:11.745 22:44:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:11.745 22:44:55 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:11.745 22:44:55 -- nvmf/common.sh@294 -- # net_devs=() 00:18:11.745 22:44:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:11.745 22:44:55 -- nvmf/common.sh@295 -- # e810=() 00:18:11.745 22:44:55 -- nvmf/common.sh@295 -- # local -ga e810 00:18:11.745 22:44:55 -- nvmf/common.sh@296 -- # x722=() 00:18:11.745 22:44:55 -- nvmf/common.sh@296 -- # local -ga x722 00:18:11.745 22:44:55 -- nvmf/common.sh@297 -- # mlx=() 00:18:11.745 22:44:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:11.745 22:44:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.745 22:44:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:11.745 22:44:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:11.745 22:44:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:11.745 22:44:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:11.745 22:44:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:11.745 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:11.745 22:44:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:11.745 22:44:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:11.745 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:11.745 22:44:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:11.745 22:44:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:11.745 22:44:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.745 22:44:55 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:11.745 22:44:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.745 22:44:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:11.745 Found net devices under 0000:31:00.0: cvl_0_0 00:18:11.745 22:44:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.745 22:44:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:11.745 22:44:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.745 22:44:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:11.745 22:44:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.745 22:44:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:11.745 Found net devices under 0000:31:00.1: cvl_0_1 00:18:11.745 22:44:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.745 22:44:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:11.745 22:44:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:11.745 22:44:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:11.745 22:44:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.745 22:44:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.745 22:44:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.745 22:44:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:11.745 22:44:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.745 22:44:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.745 22:44:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:11.745 22:44:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.745 22:44:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.745 22:44:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:11.745 22:44:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:11.745 22:44:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.745 22:44:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.745 22:44:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.745 22:44:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.745 22:44:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:11.745 22:44:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.745 22:44:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.745 22:44:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.745 22:44:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:11.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:18:11.745 00:18:11.745 --- 10.0.0.2 ping statistics --- 00:18:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.745 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:18:11.745 22:44:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:18:11.745 00:18:11.745 --- 10.0.0.1 ping statistics --- 00:18:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.745 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:18:11.745 22:44:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.745 22:44:55 -- nvmf/common.sh@410 -- # return 0 00:18:11.745 22:44:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:11.745 22:44:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.745 22:44:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.745 22:44:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:11.745 22:44:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:11.745 22:44:55 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:11.745 22:44:55 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:11.745 only one NIC for nvmf test 00:18:11.745 22:44:55 -- target/multipath.sh@47 -- # nvmftestfini 00:18:11.745 22:44:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:11.745 22:44:55 -- nvmf/common.sh@116 -- # sync 00:18:11.745 22:44:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:11.745 22:44:55 -- nvmf/common.sh@119 -- # set +e 00:18:11.745 22:44:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:11.745 22:44:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:11.745 rmmod nvme_tcp 00:18:11.745 rmmod nvme_fabrics 00:18:11.745 rmmod nvme_keyring 00:18:11.745 22:44:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:11.745 22:44:55 -- nvmf/common.sh@123 -- # set -e 00:18:11.745 22:44:55 -- nvmf/common.sh@124 -- # return 0 00:18:11.745 22:44:55 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:11.745 22:44:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:11.745 22:44:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:11.745 22:44:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.745 22:44:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:11.745 22:44:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.745 22:44:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.745 22:44:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.658 22:44:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:13.658 22:44:58 -- target/multipath.sh@48 -- # exit 0 00:18:13.658 22:44:58 -- target/multipath.sh@1 -- # nvmftestfini 00:18:13.658 22:44:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:13.658 22:44:58 -- nvmf/common.sh@116 -- # sync 00:18:13.658 22:44:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:13.658 22:44:58 -- nvmf/common.sh@119 -- # set +e 00:18:13.658 22:44:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:13.658 22:44:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:13.658 22:44:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:13.658 22:44:58 -- nvmf/common.sh@123 -- # set -e 00:18:13.658 22:44:58 -- nvmf/common.sh@124 -- # return 0 00:18:13.658 22:44:58 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:13.658 22:44:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:13.658 22:44:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:13.658 22:44:58 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:13.658 22:44:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.658 22:44:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:13.658 22:44:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.658 22:44:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.658 22:44:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.658 22:44:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:13.658 00:18:13.658 real 0m9.916s 00:18:13.658 user 0m2.114s 00:18:13.658 sys 0m5.649s 00:18:13.658 22:44:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.658 22:44:58 -- common/autotest_common.sh@10 -- # set +x 00:18:13.658 ************************************ 00:18:13.658 END TEST nvmf_multipath 00:18:13.658 ************************************ 00:18:13.658 22:44:58 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:13.658 22:44:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:13.658 22:44:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:13.658 22:44:58 -- common/autotest_common.sh@10 -- # set +x 00:18:13.658 ************************************ 00:18:13.658 START TEST nvmf_zcopy 00:18:13.658 ************************************ 00:18:13.658 22:44:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:13.658 * Looking for test storage... 00:18:13.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.658 22:44:58 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.658 22:44:58 -- nvmf/common.sh@7 -- # uname -s 00:18:13.658 22:44:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.658 22:44:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.658 22:44:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.658 22:44:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.658 22:44:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.658 22:44:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.658 22:44:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.658 22:44:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.658 22:44:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.658 22:44:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.658 22:44:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.658 22:44:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.659 22:44:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.659 22:44:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.659 22:44:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.659 22:44:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.659 22:44:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.659 22:44:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.659 22:44:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.659 22:44:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.659 22:44:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.659 22:44:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.659 22:44:58 -- paths/export.sh@5 -- # export PATH 00:18:13.659 22:44:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.659 22:44:58 -- nvmf/common.sh@46 -- # : 0 00:18:13.659 22:44:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:13.659 22:44:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:13.659 22:44:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:13.659 22:44:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.659 22:44:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.659 22:44:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:13.659 22:44:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:13.659 22:44:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:13.659 22:44:58 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:13.659 22:44:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:13.659 22:44:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.659 22:44:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:13.659 22:44:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:13.659 22:44:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:13.659 22:44:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.659 22:44:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.659 22:44:58 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.659 22:44:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:13.659 22:44:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:13.659 22:44:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:13.659 22:44:58 -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 22:45:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:21.813 22:45:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:21.813 22:45:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:21.813 22:45:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:21.813 22:45:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:21.813 22:45:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:21.813 22:45:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:21.813 22:45:06 -- nvmf/common.sh@294 -- # net_devs=() 00:18:21.813 22:45:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:21.813 22:45:06 -- nvmf/common.sh@295 -- # e810=() 00:18:21.813 22:45:06 -- nvmf/common.sh@295 -- # local -ga e810 00:18:21.813 22:45:06 -- nvmf/common.sh@296 -- # x722=() 00:18:21.813 22:45:06 -- nvmf/common.sh@296 -- # local -ga x722 00:18:21.813 22:45:06 -- nvmf/common.sh@297 -- # mlx=() 00:18:21.813 22:45:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:21.813 22:45:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.813 22:45:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:21.813 22:45:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:21.813 22:45:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:21.813 22:45:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:21.813 22:45:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:21.813 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:21.813 22:45:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:21.813 22:45:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:21.813 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:21.813 
22:45:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:21.813 22:45:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:21.813 22:45:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.813 22:45:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:21.813 22:45:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.813 22:45:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:21.813 Found net devices under 0000:31:00.0: cvl_0_0 00:18:21.813 22:45:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.813 22:45:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:21.813 22:45:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.813 22:45:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:21.813 22:45:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.813 22:45:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:21.813 Found net devices under 0000:31:00.1: cvl_0_1 00:18:21.813 22:45:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.813 22:45:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:21.813 22:45:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:21.813 22:45:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:21.813 22:45:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.813 22:45:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.813 22:45:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.813 22:45:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:21.813 22:45:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.813 22:45:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.813 22:45:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:21.813 22:45:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.813 22:45:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.813 22:45:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:21.813 22:45:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:21.813 22:45:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.813 22:45:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.813 22:45:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.813 22:45:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.813 22:45:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:21.813 22:45:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.813 22:45:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.813 22:45:06 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.813 22:45:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:21.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:18:21.813 00:18:21.813 --- 10.0.0.2 ping statistics --- 00:18:21.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.813 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:18:21.813 22:45:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:21.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:18:21.813 00:18:21.813 --- 10.0.0.1 ping statistics --- 00:18:21.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.813 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:18:21.813 22:45:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.813 22:45:06 -- nvmf/common.sh@410 -- # return 0 00:18:21.813 22:45:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:21.813 22:45:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.813 22:45:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:21.813 22:45:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.813 22:45:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:21.813 22:45:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:21.813 22:45:06 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:21.813 22:45:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:21.813 22:45:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:21.813 22:45:06 -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 22:45:06 -- nvmf/common.sh@469 -- # nvmfpid=1107966 00:18:21.813 22:45:06 -- nvmf/common.sh@470 -- # waitforlisten 1107966 00:18:21.813 22:45:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.813 22:45:06 -- common/autotest_common.sh@819 -- # '[' -z 1107966 ']' 00:18:21.813 22:45:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.813 22:45:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:21.813 22:45:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.813 22:45:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:21.813 22:45:06 -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 [2024-04-15 22:45:06.452750] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
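
[Editor's note] For reference, the nvmftestinit trace above reduces to the bring-up sequence below. This is a minimal sketch using the interface names, addresses and paths from this particular run (cvl_0_0/cvl_0_1, 10.0.0.1/.2, the Jenkins workspace checkout); SPDK_DIR and NS are shorthand introduced here, and the socket poll at the end merely stands in for the harness's waitforlisten helper, it is not the harness code.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used on this rig
NS=cvl_0_0_ns_spdk                                           # shorthand for this sketch

# Move the target-side port into its own network namespace and address both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the nvmf target inside the namespace and wait for its RPC socket to appear.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
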
00:18:21.813 [2024-04-15 22:45:06.452834] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.813 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.813 [2024-04-15 22:45:06.548969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.813 [2024-04-15 22:45:06.616326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:21.813 [2024-04-15 22:45:06.616456] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.814 [2024-04-15 22:45:06.616466] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.814 [2024-04-15 22:45:06.616473] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.814 [2024-04-15 22:45:06.616493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.758 22:45:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:22.758 22:45:07 -- common/autotest_common.sh@852 -- # return 0 00:18:22.758 22:45:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:22.758 22:45:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:22.758 22:45:07 -- common/autotest_common.sh@10 -- # set +x 00:18:22.758 22:45:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.758 22:45:07 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:22.758 22:45:07 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:22.758 22:45:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.758 22:45:07 -- common/autotest_common.sh@10 -- # set +x 00:18:22.758 [2024-04-15 22:45:07.244289] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.758 22:45:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.758 22:45:07 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:22.758 22:45:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.758 22:45:07 -- common/autotest_common.sh@10 -- # set +x 00:18:22.758 22:45:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.758 22:45:07 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.758 22:45:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.758 22:45:07 -- common/autotest_common.sh@10 -- # set +x 00:18:22.758 [2024-04-15 22:45:07.268464] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.758 22:45:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.758 22:45:07 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:22.758 22:45:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.758 22:45:07 -- common/autotest_common.sh@10 -- # set +x 00:18:22.758 22:45:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.758 22:45:07 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:22.758 22:45:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.758 22:45:07 -- common/autotest_common.sh@10 -- # set +x 00:18:22.758 malloc0 00:18:22.758 22:45:07 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:22.758 22:45:07 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:22.758 22:45:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.758 22:45:07 -- common/autotest_common.sh@10 -- # set +x 00:18:22.758 22:45:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.758 22:45:07 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:22.758 22:45:07 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:22.758 22:45:07 -- nvmf/common.sh@520 -- # config=() 00:18:22.758 22:45:07 -- nvmf/common.sh@520 -- # local subsystem config 00:18:22.758 22:45:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:22.758 22:45:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:22.758 { 00:18:22.758 "params": { 00:18:22.758 "name": "Nvme$subsystem", 00:18:22.758 "trtype": "$TEST_TRANSPORT", 00:18:22.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.758 "adrfam": "ipv4", 00:18:22.758 "trsvcid": "$NVMF_PORT", 00:18:22.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.758 "hdgst": ${hdgst:-false}, 00:18:22.758 "ddgst": ${ddgst:-false} 00:18:22.758 }, 00:18:22.758 "method": "bdev_nvme_attach_controller" 00:18:22.758 } 00:18:22.758 EOF 00:18:22.758 )") 00:18:22.758 22:45:07 -- nvmf/common.sh@542 -- # cat 00:18:22.758 22:45:07 -- nvmf/common.sh@544 -- # jq . 00:18:22.758 22:45:07 -- nvmf/common.sh@545 -- # IFS=, 00:18:22.758 22:45:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:22.758 "params": { 00:18:22.758 "name": "Nvme1", 00:18:22.758 "trtype": "tcp", 00:18:22.758 "traddr": "10.0.0.2", 00:18:22.758 "adrfam": "ipv4", 00:18:22.758 "trsvcid": "4420", 00:18:22.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.758 "hdgst": false, 00:18:22.758 "ddgst": false 00:18:22.758 }, 00:18:22.758 "method": "bdev_nvme_attach_controller" 00:18:22.758 }' 00:18:22.758 [2024-04-15 22:45:07.357307] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:22.758 [2024-04-15 22:45:07.357355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108341 ] 00:18:22.758 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.758 [2024-04-15 22:45:07.421871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.758 [2024-04-15 22:45:07.484583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.027 Running I/O for 10 seconds... 
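
[Editor's note] Before the bdevperf results below: the rpc_cmd calls traced above amount to the following plain scripts/rpc.py sequence against the target started earlier. A sketch only; $RPC is shorthand introduced here and the default /var/tmp/spdk.sock RPC socket is assumed.

RPC="$SPDK_DIR/scripts/rpc.py"    # $SPDK_DIR as in the bring-up sketch above

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                     # TCP transport with zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                            # 32 MB RAM-backed bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # expose it as NSID 1
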
00:18:33.027 00:18:33.027 Latency(us) 00:18:33.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.027 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:33.027 Verification LBA range: start 0x0 length 0x1000 00:18:33.027 Nvme1n1 : 10.05 10159.41 79.37 0.00 0.00 12524.16 3577.17 45001.39 00:18:33.027 =================================================================================================================== 00:18:33.027 Total : 10159.41 79.37 0.00 0.00 12524.16 3577.17 45001.39 00:18:33.288 22:45:17 -- target/zcopy.sh@39 -- # perfpid=1110683 00:18:33.288 22:45:17 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:33.288 22:45:17 -- common/autotest_common.sh@10 -- # set +x 00:18:33.288 22:45:17 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:33.288 22:45:17 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:33.288 22:45:17 -- nvmf/common.sh@520 -- # config=() 00:18:33.288 22:45:17 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.288 22:45:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.288 22:45:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.288 { 00:18:33.288 "params": { 00:18:33.288 "name": "Nvme$subsystem", 00:18:33.288 "trtype": "$TEST_TRANSPORT", 00:18:33.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.288 "adrfam": "ipv4", 00:18:33.288 "trsvcid": "$NVMF_PORT", 00:18:33.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.288 "hdgst": ${hdgst:-false}, 00:18:33.288 "ddgst": ${ddgst:-false} 00:18:33.288 }, 00:18:33.288 "method": "bdev_nvme_attach_controller" 00:18:33.288 } 00:18:33.288 EOF 00:18:33.288 )") 00:18:33.288 [2024-04-15 22:45:17.958262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:17.958295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 22:45:17 -- nvmf/common.sh@542 -- # cat 00:18:33.288 22:45:17 -- nvmf/common.sh@544 -- # jq . 
00:18:33.288 22:45:17 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.288 22:45:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.288 "params": { 00:18:33.288 "name": "Nvme1", 00:18:33.288 "trtype": "tcp", 00:18:33.288 "traddr": "10.0.0.2", 00:18:33.288 "adrfam": "ipv4", 00:18:33.288 "trsvcid": "4420", 00:18:33.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.288 "hdgst": false, 00:18:33.288 "ddgst": false 00:18:33.288 }, 00:18:33.288 "method": "bdev_nvme_attach_controller" 00:18:33.288 }' 00:18:33.288 [2024-04-15 22:45:17.970264] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:17.970276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 [2024-04-15 22:45:17.982292] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:17.982303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 [2024-04-15 22:45:17.994325] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:17.994336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 [2024-04-15 22:45:17.997402] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:33.288 [2024-04-15 22:45:17.997451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110683 ] 00:18:33.288 [2024-04-15 22:45:18.006359] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:18.006369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 [2024-04-15 22:45:18.018389] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:18.018399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.288 [2024-04-15 22:45:18.030423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:18.030432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 [2024-04-15 22:45:18.042454] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:18.042464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 [2024-04-15 22:45:18.054486] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:18.054496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 [2024-04-15 22:45:18.061635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.288 [2024-04-15 22:45:18.066518] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.288 [2024-04-15 22:45:18.066529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.288 [2024-04-15 22:45:18.078550] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.289 [2024-04-15 22:45:18.078561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
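
[Editor's note] The repeated error pairs through this stretch of the log are target-side messages: each pair is one nvmf_subsystem_add_ns RPC that was rejected because NSID 1 on cnode1 was still occupied at that moment, and the trace shows these calls being issued repeatedly while the second bdevperf run is brought up. As a rough illustration only (this is not the harness's actual loop, which is not shown in the trace), the failure and the recovery path look like:

# Hypothetical illustration; $RPC as in the earlier sketch.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: "Requested NSID 1 already in use"
$RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach NSID 1 first
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds once NSID 1 is free
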
00:18:33.289 [2024-04-15 22:45:18.090584] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.289 [2024-04-15 22:45:18.090595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.102614] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.102628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.114646] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.114657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.124094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.549 [2024-04-15 22:45:18.126678] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.126688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.138711] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.138723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.150747] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.150761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.162778] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.162790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.174811] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.174821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.186840] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.186850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.198881] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.549 [2024-04-15 22:45:18.198904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.549 [2024-04-15 22:45:18.210910] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.210921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.222943] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.222955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.234975] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.234986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.247010] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.247018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.259039] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.259048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.271077] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.271089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.283107] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.283116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.295139] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.295148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.307173] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.307182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.319206] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.319217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.331236] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.331245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.343269] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.343277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.550 [2024-04-15 22:45:18.355305] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.550 [2024-04-15 22:45:18.355314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.367338] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.367348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.379381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.379399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 Running I/O for 5 seconds... 
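
[Editor's note] The two bdevperf invocations in this test (the 10-second verify pass whose results appear above and the 5-second random read/write pass that starts here) are fed their attach-controller config over /dev/fd by the harness's gen_nvmf_target_json helper. Written out as standalone commands, roughly (a sketch; $SPDK_DIR and BDEVPERF are shorthand from the earlier sketches, and gen_nvmf_target_json is the harness function shown in the trace):

BDEVPERF="$SPDK_DIR/build/examples/bdevperf"

$BDEVPERF --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192        # verify workload, queue depth 128, 8 KiB I/O
$BDEVPERF --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192   # 50/50 mixed random read/write, same depth and I/O size
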
00:18:33.810 [2024-04-15 22:45:18.396734] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.396752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.413261] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.413279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.430385] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.430404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.446933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.446956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.463432] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.463451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.480166] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.480184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.497300] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.497318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.810 [2024-04-15 22:45:18.513965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.810 [2024-04-15 22:45:18.513983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.811 [2024-04-15 22:45:18.531158] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.811 [2024-04-15 22:45:18.531175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.811 [2024-04-15 22:45:18.547834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.811 [2024-04-15 22:45:18.547851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.811 [2024-04-15 22:45:18.564630] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.811 [2024-04-15 22:45:18.564649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.811 [2024-04-15 22:45:18.581201] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.811 [2024-04-15 22:45:18.581219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.811 [2024-04-15 22:45:18.597768] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.811 [2024-04-15 22:45:18.597786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.811 [2024-04-15 22:45:18.614722] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.811 [2024-04-15 22:45:18.614740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.631586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 
[2024-04-15 22:45:18.631603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.648289] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.648308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.664792] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.664810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.681765] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.681783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.698391] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.698408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.714858] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.714876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.732002] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.732019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.748088] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.748106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.759093] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.759116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.774887] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.774906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.791767] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.791785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.808506] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.808524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.825698] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.825716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.842626] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.842645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.859227] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.859245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.070 [2024-04-15 22:45:18.875658] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.070 [2024-04-15 22:45:18.875676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:18.892877] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:18.892895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:18.909478] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:18.909496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:18.926332] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:18.926350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:18.943222] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:18.943240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:18.959476] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:18.959494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:18.976015] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:18.976033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:18.992773] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:18.992790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:19.009196] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:19.009214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:19.025823] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:19.025841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:19.042137] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:19.042154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:19.058922] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:19.058940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:19.075057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:19.075075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:19.091886] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:19.091903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:19.108928] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:19.108946] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.330 [2024-04-15 22:45:19.125436] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.330 [2024-04-15 22:45:19.125453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.142154] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.142172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.159228] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.159246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.176058] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.176075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.192631] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.192648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.209979] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.209997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.226435] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.226453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.243248] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.243266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.259764] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.259781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.276913] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.276931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.293423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.293440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.310112] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.310131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.327146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.327164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.343366] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.343384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.359812] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.359830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.376722] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.376740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.590 [2024-04-15 22:45:19.393018] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.590 [2024-04-15 22:45:19.393036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.410301] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.410319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.426852] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.426869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.443963] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.443981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.460771] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.460789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.477310] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.477328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.493843] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.493861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.510945] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.510963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.527321] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.527339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.544199] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.544217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.560745] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.560762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.577314] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.577332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.594408] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.594425] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.611007] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.611025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.628083] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.628100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.850 [2024-04-15 22:45:19.644828] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.850 [2024-04-15 22:45:19.644846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.661919] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.661936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.678373] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.678391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.694822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.694840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.711656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.711675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.728530] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.728554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.745245] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.745263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.761261] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.761279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.778636] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.778654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.794978] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.794995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.812131] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.812149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.828735] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.828754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.845654] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.845672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.862017] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.862034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.879177] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.879194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.895895] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.895913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.112 [2024-04-15 22:45:19.912312] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.112 [2024-04-15 22:45:19.912331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.373 [2024-04-15 22:45:19.929143] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:19.929161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:19.946148] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:19.946166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:19.962377] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:19.962394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:19.978907] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:19.978924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:19.995889] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:19.995907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.011842] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.011865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.024078] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.024103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.040916] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.040942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.058276] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.058298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.074864] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.074884] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.091600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.091619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.107967] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.107984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.124896] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.124914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.141833] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.141851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.158525] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.158548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.374 [2024-04-15 22:45:20.175514] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.374 [2024-04-15 22:45:20.175531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.192440] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.192458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.208895] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.208913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.225048] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.225066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.242534] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.242556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.259721] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.259740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.276044] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.276062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.293049] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.293066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.310064] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.310082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.326174] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.326196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.343407] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.343425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.359668] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.359686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.376027] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.376045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.392540] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.392563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.409097] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.409114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.426279] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.426297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.636 [2024-04-15 22:45:20.442726] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.636 [2024-04-15 22:45:20.442743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.458764] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.458782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.469944] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.469961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.486464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.486482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.502382] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.502399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.519182] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.519199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.536038] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.536056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.552951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.552968] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.569800] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.569818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.586986] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.587004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.603401] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.603419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.620506] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.620524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.636471] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.636493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.653412] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.653429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.670514] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.670531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.687252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.687269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.897 [2024-04-15 22:45:20.704358] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.897 [2024-04-15 22:45:20.704376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.721320] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.721338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.738588] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.738606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.754270] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.754287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.765660] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.765677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.781186] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.781204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.797815] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.797832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.814880] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.814897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.831127] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.831145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.847384] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.847402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.864105] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.864123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.880331] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.880349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.896639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.896656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.913971] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.913990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.929906] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.929924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.947213] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.947239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.159 [2024-04-15 22:45:20.963802] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.159 [2024-04-15 22:45:20.963820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:20.980828] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:20.980848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:20.997810] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:20.997827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.014111] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.014129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.030477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.030495] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.047400] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.047418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.063604] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.063621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.080165] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.080183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.096763] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.096781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.113477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.113495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.130471] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.130490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.147001] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.147019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.163557] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.163575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.181162] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.181180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.197900] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.197917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.421 [2024-04-15 22:45:21.214680] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.421 [2024-04-15 22:45:21.214698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.231135] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.231152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.247871] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.247889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.265097] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.265119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.281180] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.281198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.298136] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.298154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.314898] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.314916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.331455] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.331474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.347911] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.347930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.364684] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.364702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.381273] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.381291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.398364] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.398382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.415112] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.415130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.431822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.431840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.448440] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.448457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.465446] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.465464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.683 [2024-04-15 22:45:21.482413] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.683 [2024-04-15 22:45:21.482431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.499052] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.499070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.515907] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.515925] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.532773] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.532791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.549231] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.549248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.566088] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.566107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.582616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.582635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.599596] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.599614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.616111] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.616128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.632423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.632440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.649344] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.649362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.666026] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.666044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.683006] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.683023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.699651] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.699668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.716704] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.716722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.732687] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.732704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.945 [2024-04-15 22:45:21.749465] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.945 [2024-04-15 22:45:21.749482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.766571] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.766589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.783461] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.783479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.800166] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.800183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.817264] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.817281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.833402] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.833419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.850445] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.850463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.866962] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.866979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.883746] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.883763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.900051] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.900068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.916667] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.916685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.933463] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.933482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.949812] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.949830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.960962] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.960979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.978011] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.978028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:21.995007] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:21.995025] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.207 [2024-04-15 22:45:22.010973] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.207 [2024-04-15 22:45:22.010991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.468 [2024-04-15 22:45:22.025893] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.468 [2024-04-15 22:45:22.025911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.468 [2024-04-15 22:45:22.036564] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.468 [2024-04-15 22:45:22.036581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.468 [2024-04-15 22:45:22.052902] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.468 [2024-04-15 22:45:22.052920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.468 [2024-04-15 22:45:22.069757] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.468 [2024-04-15 22:45:22.069774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.468 [2024-04-15 22:45:22.086160] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.468 [2024-04-15 22:45:22.086177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.102579] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.102596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.118788] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.118805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.135270] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.135287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.151882] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.151899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.167794] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.167811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.179430] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.179447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.196448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.196465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.212428] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.212445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.229286] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.229303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.246234] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.246251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.469 [2024-04-15 22:45:22.263222] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.469 [2024-04-15 22:45:22.263239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.279875] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.279893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.296719] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.296736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.313574] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.313592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.329983] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.330001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.346101] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.346119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.356736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.356754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.373693] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.373711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.389777] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.389794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.406767] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.406784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.423105] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.423123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.439448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.728 [2024-04-15 22:45:22.439465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.728 [2024-04-15 22:45:22.456070] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.729 [2024-04-15 22:45:22.456087] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.729 [2024-04-15 22:45:22.472665] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.729 [2024-04-15 22:45:22.472683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.729 [2024-04-15 22:45:22.489137] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.729 [2024-04-15 22:45:22.489154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.729 [2024-04-15 22:45:22.506285] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.729 [2024-04-15 22:45:22.506302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.729 [2024-04-15 22:45:22.522908] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.729 [2024-04-15 22:45:22.522925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.539964] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.539981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.555894] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.555911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.572405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.572423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.583490] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.583507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.599739] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.599756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.617005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.617023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.633424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.633441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.650387] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.650404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.666987] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.667005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.683973] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.683991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.699992] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.700010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.716578] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.716597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.988 [2024-04-15 22:45:22.733689] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.988 [2024-04-15 22:45:22.733707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.989 [2024-04-15 22:45:22.750464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.989 [2024-04-15 22:45:22.750481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.989 [2024-04-15 22:45:22.767137] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.989 [2024-04-15 22:45:22.767155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.989 [2024-04-15 22:45:22.784163] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.989 [2024-04-15 22:45:22.784181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.801252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.249 [2024-04-15 22:45:22.801274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.817199] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.249 [2024-04-15 22:45:22.817217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.833856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.249 [2024-04-15 22:45:22.833873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.850472] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.249 [2024-04-15 22:45:22.850490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.866699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.249 [2024-04-15 22:45:22.866717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.884095] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.249 [2024-04-15 22:45:22.884113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.900499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.249 [2024-04-15 22:45:22.900517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.917788] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.249 [2024-04-15 22:45:22.917806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.249 [2024-04-15 22:45:22.933762] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.250 [2024-04-15 22:45:22.933780] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.250 [2024-04-15 22:45:22.951012] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.250 [2024-04-15 22:45:22.951030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.250 [2024-04-15 22:45:22.966849] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.250 [2024-04-15 22:45:22.966867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.250 [2024-04-15 22:45:22.983143] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.250 [2024-04-15 22:45:22.983160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.250 [2024-04-15 22:45:22.999712] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.250 [2024-04-15 22:45:22.999730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.250 [2024-04-15 22:45:23.011043] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.250 [2024-04-15 22:45:23.011060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.250 [2024-04-15 22:45:23.027227] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.250 [2024-04-15 22:45:23.027245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.250 [2024-04-15 22:45:23.044182] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.250 [2024-04-15 22:45:23.044199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.060477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.060494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.077852] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.077871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.094074] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.094092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.110936] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.110958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.127613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.127631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.144473] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.144491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.161887] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.161905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.178577] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.178595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.195927] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.195945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.212536] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.212559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.229039] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.229057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.245816] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.245833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.262379] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.510 [2024-04-15 22:45:23.262397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.510 [2024-04-15 22:45:23.279401] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.511 [2024-04-15 22:45:23.279419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.511 [2024-04-15 22:45:23.295950] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.511 [2024-04-15 22:45:23.295968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.511 [2024-04-15 22:45:23.313037] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.511 [2024-04-15 22:45:23.313054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.771 [2024-04-15 22:45:23.329406] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.771 [2024-04-15 22:45:23.329424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.771 [2024-04-15 22:45:23.340522] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.771 [2024-04-15 22:45:23.340541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.771 [2024-04-15 22:45:23.356737] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.356754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.373518] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.373536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.390319] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.390337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.403490] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.403509] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 00:18:38.772 Latency(us) 00:18:38.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.772 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:38.772 Nvme1n1 : 5.01 14069.20 109.92 0.00 0.00 9087.06 4123.31 18786.99 00:18:38.772 =================================================================================================================== 00:18:38.772 Total : 14069.20 109.92 0.00 0.00 9087.06 4123.31 18786.99 00:18:38.772 [2024-04-15 22:45:23.414062] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.414076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.426097] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.426112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.438128] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.438142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.450158] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.450173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.462190] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.462203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.474219] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.474230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.486250] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.486260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.498283] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.498296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.510315] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.510327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.522348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.522361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 [2024-04-15 22:45:23.534381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.772 [2024-04-15 22:45:23.534390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1110683) - No such process 00:18:38.772 22:45:23 -- target/zcopy.sh@49 -- # wait 1110683 00:18:38.772 22:45:23 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
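The wall of "Requested NSID 1 already in use" / "Unable to add namespace" messages above comes from the zcopy test repeatedly asking the target to attach another namespace at NSID 1 while that NSID is still populated; each attempt is rejected in subsystem.c and reported through nvmf_rpc.c, and the run then prints the latency summary for the Nvme1n1 job. A minimal sketch of the collision and how it clears, assuming the default RPC socket and using Malloc0 as an illustrative bdev name (the bdev actually used in this run is not shown at this point in the trace):

# Hedged sketch: provoke and then clear "Requested NSID 1 already in use".
# Assumes NSID 1 on cnode1 is already populated; Malloc0 is illustrative only.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 \
    || echo "add rejected while NSID 1 is in use (expected)"
$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # free the NSID, as zcopy.sh@52 does above
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1    # now succeeds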
00:18:38.772 22:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.772 22:45:23 -- common/autotest_common.sh@10 -- # set +x 00:18:38.772 22:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.772 22:45:23 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:38.772 22:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.772 22:45:23 -- common/autotest_common.sh@10 -- # set +x 00:18:38.772 delay0 00:18:38.772 22:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.772 22:45:23 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:38.772 22:45:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.772 22:45:23 -- common/autotest_common.sh@10 -- # set +x 00:18:38.772 22:45:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.772 22:45:23 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:39.032 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.032 [2024-04-15 22:45:23.631082] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:45.662 Initializing NVMe Controllers 00:18:45.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:45.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:45.662 Initialization complete. Launching workers. 00:18:45.662 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 162 00:18:45.662 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 431, failed to submit 51 00:18:45.662 success 249, unsuccess 182, failed 0 00:18:45.662 22:45:29 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:45.662 22:45:29 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:45.662 22:45:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:45.662 22:45:29 -- nvmf/common.sh@116 -- # sync 00:18:45.662 22:45:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:45.662 22:45:29 -- nvmf/common.sh@119 -- # set +e 00:18:45.662 22:45:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:45.662 22:45:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:45.662 rmmod nvme_tcp 00:18:45.662 rmmod nvme_fabrics 00:18:45.662 rmmod nvme_keyring 00:18:45.662 22:45:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:45.662 22:45:29 -- nvmf/common.sh@123 -- # set -e 00:18:45.662 22:45:29 -- nvmf/common.sh@124 -- # return 0 00:18:45.662 22:45:29 -- nvmf/common.sh@477 -- # '[' -n 1107966 ']' 00:18:45.662 22:45:29 -- nvmf/common.sh@478 -- # killprocess 1107966 00:18:45.663 22:45:29 -- common/autotest_common.sh@926 -- # '[' -z 1107966 ']' 00:18:45.663 22:45:29 -- common/autotest_common.sh@930 -- # kill -0 1107966 00:18:45.663 22:45:29 -- common/autotest_common.sh@931 -- # uname 00:18:45.663 22:45:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:45.663 22:45:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1107966 00:18:45.663 22:45:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:45.663 22:45:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:45.663 22:45:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
1107966' 00:18:45.663 killing process with pid 1107966 00:18:45.663 22:45:30 -- common/autotest_common.sh@945 -- # kill 1107966 00:18:45.663 22:45:30 -- common/autotest_common.sh@950 -- # wait 1107966 00:18:45.663 22:45:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:45.663 22:45:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:45.663 22:45:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:45.663 22:45:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.663 22:45:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:45.663 22:45:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.663 22:45:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.663 22:45:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.577 22:45:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:47.577 00:18:47.577 real 0m34.106s 00:18:47.577 user 0m45.065s 00:18:47.577 sys 0m10.575s 00:18:47.577 22:45:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.577 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:18:47.577 ************************************ 00:18:47.577 END TEST nvmf_zcopy 00:18:47.577 ************************************ 00:18:47.577 22:45:32 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:47.577 22:45:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:47.577 22:45:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:47.577 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:18:47.577 ************************************ 00:18:47.577 START TEST nvmf_nmic 00:18:47.577 ************************************ 00:18:47.577 22:45:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:47.577 * Looking for test storage... 
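Before run_test launches nvmf_nmic, the zcopy run above tears itself down with nvmftestfini. A condensed sketch of that cleanup, taking the PID (1107966) and interface name (cvl_0_1) from this run's trace rather than deriving them:

# Condensed sketch of the nvmftestfini sequence traced above (zcopy teardown).
nvmfpid=1107966                      # nvmf_tgt PID recorded for this run
sync
modprobe -v -r nvme-tcp              # the helper retries these module removals in a loop
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt reactor process
ip -4 addr flush cvl_0_1             # drop the 10.0.0.x test address from the initiator port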
00:18:47.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.838 22:45:32 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.838 22:45:32 -- nvmf/common.sh@7 -- # uname -s 00:18:47.838 22:45:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.838 22:45:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.838 22:45:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.838 22:45:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.838 22:45:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.838 22:45:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.838 22:45:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.838 22:45:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.838 22:45:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.838 22:45:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.838 22:45:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:47.838 22:45:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:47.838 22:45:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.838 22:45:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.838 22:45:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.838 22:45:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.838 22:45:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.839 22:45:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.839 22:45:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.839 22:45:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.839 22:45:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.839 22:45:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.839 22:45:32 -- paths/export.sh@5 -- # export PATH 00:18:47.839 22:45:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.839 22:45:32 -- nvmf/common.sh@46 -- # : 0 00:18:47.839 22:45:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:47.839 22:45:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:47.839 22:45:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:47.839 22:45:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.839 22:45:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.839 22:45:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:47.839 22:45:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:47.839 22:45:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:47.839 22:45:32 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.839 22:45:32 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.839 22:45:32 -- target/nmic.sh@14 -- # nvmftestinit 00:18:47.839 22:45:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:47.839 22:45:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.839 22:45:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:47.839 22:45:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:47.839 22:45:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:47.839 22:45:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.839 22:45:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.839 22:45:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.839 22:45:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:47.839 22:45:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:47.839 22:45:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:47.839 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.005 22:45:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:56.005 22:45:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:56.005 22:45:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:56.005 22:45:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:56.005 22:45:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:56.005 22:45:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:56.005 22:45:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:56.005 22:45:40 -- nvmf/common.sh@294 -- # net_devs=() 00:18:56.005 22:45:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:56.005 22:45:40 -- nvmf/common.sh@295 -- # 
e810=() 00:18:56.005 22:45:40 -- nvmf/common.sh@295 -- # local -ga e810 00:18:56.005 22:45:40 -- nvmf/common.sh@296 -- # x722=() 00:18:56.005 22:45:40 -- nvmf/common.sh@296 -- # local -ga x722 00:18:56.005 22:45:40 -- nvmf/common.sh@297 -- # mlx=() 00:18:56.005 22:45:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:56.005 22:45:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.005 22:45:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:56.005 22:45:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:56.005 22:45:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:56.005 22:45:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:56.005 22:45:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:56.005 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:56.005 22:45:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:56.005 22:45:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:56.005 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:56.005 22:45:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:56.005 22:45:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:56.005 22:45:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.005 22:45:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:56.005 22:45:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.005 22:45:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:56.005 Found net 
devices under 0000:31:00.0: cvl_0_0 00:18:56.005 22:45:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.005 22:45:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:56.005 22:45:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.005 22:45:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:56.005 22:45:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.005 22:45:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:56.005 Found net devices under 0000:31:00.1: cvl_0_1 00:18:56.005 22:45:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.005 22:45:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:56.005 22:45:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:56.005 22:45:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:56.005 22:45:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:56.005 22:45:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.005 22:45:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.005 22:45:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.005 22:45:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:56.005 22:45:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.005 22:45:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.005 22:45:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:56.005 22:45:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.005 22:45:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.005 22:45:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:56.005 22:45:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:56.005 22:45:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.005 22:45:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.005 22:45:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.005 22:45:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.005 22:45:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:56.005 22:45:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.005 22:45:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.005 22:45:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.005 22:45:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:56.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:18:56.005 00:18:56.005 --- 10.0.0.2 ping statistics --- 00:18:56.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.005 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:18:56.006 22:45:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:56.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:18:56.006 00:18:56.006 --- 10.0.0.1 ping statistics --- 00:18:56.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.006 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:18:56.006 22:45:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.006 22:45:40 -- nvmf/common.sh@410 -- # return 0 00:18:56.006 22:45:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:56.006 22:45:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.006 22:45:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:56.006 22:45:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:56.006 22:45:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.006 22:45:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:56.006 22:45:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:56.006 22:45:40 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:56.006 22:45:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:56.006 22:45:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:56.006 22:45:40 -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 22:45:40 -- nvmf/common.sh@469 -- # nvmfpid=1117603 00:18:56.006 22:45:40 -- nvmf/common.sh@470 -- # waitforlisten 1117603 00:18:56.006 22:45:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:56.006 22:45:40 -- common/autotest_common.sh@819 -- # '[' -z 1117603 ']' 00:18:56.006 22:45:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.006 22:45:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:56.006 22:45:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.006 22:45:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:56.006 22:45:40 -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 [2024-04-15 22:45:40.442514] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:56.006 [2024-04-15 22:45:40.442593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.006 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.006 [2024-04-15 22:45:40.523927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.006 [2024-04-15 22:45:40.601927] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:56.006 [2024-04-15 22:45:40.602071] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.006 [2024-04-15 22:45:40.602082] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.006 [2024-04-15 22:45:40.602090] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
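The nvmf_tcp_init block traced above (nvmf/common.sh@228 onward) is what gives the test its 10.0.0.1/10.0.0.2 pair: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace, both sides are addressed and ping-checked, and only then is nvmf_tgt started. A condensed sketch of those commands, copied from the trace rather than generalized:

# Condensed sketch of nvmf_tcp_init as traced above: target port in a netns,
# initiator port in the root namespace, TCP port 4420 opened on the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator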
00:18:56.006 [2024-04-15 22:45:40.602239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.006 [2024-04-15 22:45:40.602380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.006 [2024-04-15 22:45:40.602550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.006 [2024-04-15 22:45:40.602560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.577 22:45:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:56.577 22:45:41 -- common/autotest_common.sh@852 -- # return 0 00:18:56.577 22:45:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:56.577 22:45:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 22:45:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.577 22:45:41 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:56.577 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 [2024-04-15 22:45:41.264582] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.577 22:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.577 22:45:41 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:56.577 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 Malloc0 00:18:56.577 22:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.577 22:45:41 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:56.577 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 22:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.577 22:45:41 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:56.577 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 22:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.577 22:45:41 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.577 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 [2024-04-15 22:45:41.324001] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.577 22:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.577 22:45:41 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:56.577 test case1: single bdev can't be used in multiple subsystems 00:18:56.577 22:45:41 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:56.577 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 22:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.577 22:45:41 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:56.577 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 
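With the target running, nmic.sh builds its subsystems entirely through rpc_cmd, as the trace above shows: a TCP transport, one 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as a namespace and listening on 10.0.0.2:4420, plus a second subsystem cnode2 for the negative test. The same calls issued directly with rpc.py look like the sketch below; leaving the socket path at its default is an assumption:

# Sketch of the nmic target setup, mirroring the rpc_cmd calls traced above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2                      # for test case1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420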
00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 22:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.577 22:45:41 -- target/nmic.sh@28 -- # nmic_status=0 00:18:56.577 22:45:41 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:56.577 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.577 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.577 [2024-04-15 22:45:41.359957] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:56.577 [2024-04-15 22:45:41.359974] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:56.577 [2024-04-15 22:45:41.359982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.577 request: 00:18:56.577 { 00:18:56.577 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:56.577 "namespace": { 00:18:56.577 "bdev_name": "Malloc0" 00:18:56.577 }, 00:18:56.577 "method": "nvmf_subsystem_add_ns", 00:18:56.577 "req_id": 1 00:18:56.577 } 00:18:56.577 Got JSON-RPC error response 00:18:56.577 response: 00:18:56.577 { 00:18:56.577 "code": -32602, 00:18:56.577 "message": "Invalid parameters" 00:18:56.577 } 00:18:56.577 22:45:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:56.577 22:45:41 -- target/nmic.sh@29 -- # nmic_status=1 00:18:56.577 22:45:41 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:56.577 22:45:41 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:56.577 Adding namespace failed - expected result. 00:18:56.577 22:45:41 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:56.578 test case2: host connect to nvmf target in multiple paths 00:18:56.578 22:45:41 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:56.578 22:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.578 22:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:56.578 [2024-04-15 22:45:41.372088] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:56.578 22:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.578 22:45:41 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:58.489 22:45:42 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:59.872 22:45:44 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:59.872 22:45:44 -- common/autotest_common.sh@1177 -- # local i=0 00:18:59.872 22:45:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.872 22:45:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:59.872 22:45:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:01.787 22:45:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:01.787 22:45:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:01.787 22:45:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:01.787 22:45:46 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:19:01.787 22:45:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.787 22:45:46 -- common/autotest_common.sh@1187 -- # return 0 00:19:01.787 22:45:46 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:01.787 [global] 00:19:01.787 thread=1 00:19:01.787 invalidate=1 00:19:01.787 rw=write 00:19:01.787 time_based=1 00:19:01.787 runtime=1 00:19:01.787 ioengine=libaio 00:19:01.787 direct=1 00:19:01.787 bs=4096 00:19:01.787 iodepth=1 00:19:01.787 norandommap=0 00:19:01.787 numjobs=1 00:19:01.787 00:19:01.787 verify_dump=1 00:19:01.787 verify_backlog=512 00:19:01.787 verify_state_save=0 00:19:01.787 do_verify=1 00:19:01.787 verify=crc32c-intel 00:19:01.787 [job0] 00:19:01.787 filename=/dev/nvme0n1 00:19:01.787 Could not set queue depth (nvme0n1) 00:19:02.047 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.047 fio-3.35 00:19:02.047 Starting 1 thread 00:19:03.431 00:19:03.431 job0: (groupid=0, jobs=1): err= 0: pid=1119107: Mon Apr 15 22:45:48 2024 00:19:03.431 read: IOPS=15, BW=62.4KiB/s (63.9kB/s)(64.0KiB/1025msec) 00:19:03.431 slat (nsec): min=24916, max=26999, avg=25534.00, stdev=580.16 00:19:03.431 clat (usec): min=1103, max=43005, avg=39491.50, stdev=10244.64 00:19:03.431 lat (usec): min=1128, max=43030, avg=39517.04, stdev=10244.74 00:19:03.431 clat percentiles (usec): 00:19:03.431 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41681], 00:19:03.431 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:03.431 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:19:03.431 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:03.431 | 99.99th=[43254] 00:19:03.431 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:19:03.431 slat (usec): min=9, max=22606, avg=72.93, stdev=997.86 00:19:03.431 clat (usec): min=308, max=894, avg=679.58, stdev=97.02 00:19:03.431 lat (usec): min=318, max=23378, avg=752.51, stdev=1007.00 00:19:03.431 clat percentiles (usec): 00:19:03.431 | 1.00th=[ 404], 5.00th=[ 498], 10.00th=[ 545], 20.00th=[ 603], 00:19:03.431 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 685], 60.00th=[ 709], 00:19:03.431 | 70.00th=[ 734], 80.00th=[ 766], 90.00th=[ 791], 95.00th=[ 816], 00:19:03.431 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 898], 99.95th=[ 898], 00:19:03.431 | 99.99th=[ 898] 00:19:03.431 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.431 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.431 lat (usec) : 500=5.87%, 750=65.34%, 1000=25.76% 00:19:03.431 lat (msec) : 2=0.19%, 50=2.84% 00:19:03.431 cpu : usr=0.98%, sys=1.17%, ctx=532, majf=0, minf=1 00:19:03.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.431 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.431 00:19:03.431 Run status group 0 (all jobs): 00:19:03.431 READ: bw=62.4KiB/s (63.9kB/s), 62.4KiB/s-62.4KiB/s (63.9kB/s-63.9kB/s), io=64.0KiB (65.5kB), run=1025-1025msec 00:19:03.431 WRITE: bw=1998KiB/s (2046kB/s), 1998KiB/s-1998KiB/s (2046kB/s-2046kB/s), 
io=2048KiB (2097kB), run=1025-1025msec 00:19:03.431 00:19:03.431 Disk stats (read/write): 00:19:03.431 nvme0n1: ios=38/512, merge=0/0, ticks=1466/335, in_queue=1801, util=98.40% 00:19:03.431 22:45:48 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:03.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:03.431 22:45:48 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:03.431 22:45:48 -- common/autotest_common.sh@1198 -- # local i=0 00:19:03.431 22:45:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:03.431 22:45:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.431 22:45:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:03.431 22:45:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.431 22:45:48 -- common/autotest_common.sh@1210 -- # return 0 00:19:03.431 22:45:48 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:03.431 22:45:48 -- target/nmic.sh@53 -- # nvmftestfini 00:19:03.431 22:45:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:03.431 22:45:48 -- nvmf/common.sh@116 -- # sync 00:19:03.431 22:45:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:03.431 22:45:48 -- nvmf/common.sh@119 -- # set +e 00:19:03.431 22:45:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:03.431 22:45:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:03.431 rmmod nvme_tcp 00:19:03.431 rmmod nvme_fabrics 00:19:03.431 rmmod nvme_keyring 00:19:03.431 22:45:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:03.431 22:45:48 -- nvmf/common.sh@123 -- # set -e 00:19:03.431 22:45:48 -- nvmf/common.sh@124 -- # return 0 00:19:03.431 22:45:48 -- nvmf/common.sh@477 -- # '[' -n 1117603 ']' 00:19:03.431 22:45:48 -- nvmf/common.sh@478 -- # killprocess 1117603 00:19:03.431 22:45:48 -- common/autotest_common.sh@926 -- # '[' -z 1117603 ']' 00:19:03.431 22:45:48 -- common/autotest_common.sh@930 -- # kill -0 1117603 00:19:03.431 22:45:48 -- common/autotest_common.sh@931 -- # uname 00:19:03.692 22:45:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:03.692 22:45:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1117603 00:19:03.692 22:45:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:03.692 22:45:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:03.692 22:45:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1117603' 00:19:03.692 killing process with pid 1117603 00:19:03.692 22:45:48 -- common/autotest_common.sh@945 -- # kill 1117603 00:19:03.692 22:45:48 -- common/autotest_common.sh@950 -- # wait 1117603 00:19:03.692 22:45:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:03.692 22:45:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:03.692 22:45:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:03.692 22:45:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.692 22:45:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:03.692 22:45:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.692 22:45:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.692 22:45:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.237 22:45:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:06.237 00:19:06.237 real 0m18.216s 00:19:06.237 user 0m49.556s 00:19:06.237 sys 0m6.727s 00:19:06.237 22:45:50 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.237 22:45:50 -- common/autotest_common.sh@10 -- # set +x 00:19:06.237 ************************************ 00:19:06.237 END TEST nvmf_nmic 00:19:06.237 ************************************ 00:19:06.237 22:45:50 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:06.237 22:45:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:06.237 22:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:06.237 22:45:50 -- common/autotest_common.sh@10 -- # set +x 00:19:06.237 ************************************ 00:19:06.237 START TEST nvmf_fio_target 00:19:06.237 ************************************ 00:19:06.237 22:45:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:06.237 * Looking for test storage... 00:19:06.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.237 22:45:50 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.237 22:45:50 -- nvmf/common.sh@7 -- # uname -s 00:19:06.237 22:45:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.237 22:45:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.237 22:45:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.237 22:45:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.237 22:45:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.237 22:45:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.237 22:45:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.237 22:45:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.237 22:45:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.237 22:45:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.237 22:45:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.237 22:45:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.237 22:45:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.237 22:45:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.237 22:45:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.237 22:45:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.237 22:45:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.237 22:45:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.237 22:45:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.237 22:45:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.237 22:45:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.237 22:45:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.237 22:45:50 -- paths/export.sh@5 -- # export PATH 00:19:06.237 22:45:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.237 22:45:50 -- nvmf/common.sh@46 -- # : 0 00:19:06.237 22:45:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:06.237 22:45:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:06.237 22:45:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:06.237 22:45:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.237 22:45:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.237 22:45:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:06.237 22:45:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:06.238 22:45:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:06.238 22:45:50 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:06.238 22:45:50 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:06.238 22:45:50 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.238 22:45:50 -- target/fio.sh@16 -- # nvmftestinit 00:19:06.238 22:45:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:06.238 22:45:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.238 22:45:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:06.238 22:45:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:06.238 22:45:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:06.238 22:45:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.238 22:45:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.238 22:45:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.238 22:45:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:06.238 22:45:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:06.238 22:45:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:06.238 22:45:50 -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.380 22:45:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:14.380 22:45:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:14.380 22:45:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:14.380 22:45:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:14.380 22:45:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:14.380 22:45:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:14.380 22:45:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:14.380 22:45:58 -- nvmf/common.sh@294 -- # net_devs=() 00:19:14.380 22:45:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:14.380 22:45:58 -- nvmf/common.sh@295 -- # e810=() 00:19:14.380 22:45:58 -- nvmf/common.sh@295 -- # local -ga e810 00:19:14.380 22:45:58 -- nvmf/common.sh@296 -- # x722=() 00:19:14.380 22:45:58 -- nvmf/common.sh@296 -- # local -ga x722 00:19:14.380 22:45:58 -- nvmf/common.sh@297 -- # mlx=() 00:19:14.380 22:45:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:14.380 22:45:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.380 22:45:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:14.380 22:45:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:14.380 22:45:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:14.380 22:45:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:14.380 22:45:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:14.380 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:14.380 22:45:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:14.380 22:45:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:14.380 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:14.380 22:45:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:19:14.380 22:45:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:14.380 22:45:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:14.380 22:45:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.380 22:45:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:14.380 22:45:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.380 22:45:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:14.380 Found net devices under 0000:31:00.0: cvl_0_0 00:19:14.380 22:45:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.380 22:45:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:14.380 22:45:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.380 22:45:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:14.380 22:45:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.380 22:45:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:14.380 Found net devices under 0000:31:00.1: cvl_0_1 00:19:14.380 22:45:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.380 22:45:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:14.380 22:45:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:14.380 22:45:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:14.380 22:45:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:14.380 22:45:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.380 22:45:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.380 22:45:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.380 22:45:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:14.380 22:45:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.380 22:45:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.380 22:45:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:14.380 22:45:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.380 22:45:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.380 22:45:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:14.380 22:45:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:14.381 22:45:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.381 22:45:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.381 22:45:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.381 22:45:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.381 22:45:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:14.381 22:45:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.381 22:45:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.381 22:45:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.381 22:45:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:14.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:14.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:19:14.381 00:19:14.381 --- 10.0.0.2 ping statistics --- 00:19:14.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.381 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:19:14.381 22:45:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:19:14.381 00:19:14.381 --- 10.0.0.1 ping statistics --- 00:19:14.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.381 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:19:14.381 22:45:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.381 22:45:58 -- nvmf/common.sh@410 -- # return 0 00:19:14.381 22:45:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:14.381 22:45:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.381 22:45:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:14.381 22:45:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:14.381 22:45:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.381 22:45:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:14.381 22:45:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:14.381 22:45:58 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:14.381 22:45:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:14.381 22:45:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:14.381 22:45:58 -- common/autotest_common.sh@10 -- # set +x 00:19:14.381 22:45:58 -- nvmf/common.sh@469 -- # nvmfpid=1124144 00:19:14.381 22:45:58 -- nvmf/common.sh@470 -- # waitforlisten 1124144 00:19:14.381 22:45:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:14.381 22:45:58 -- common/autotest_common.sh@819 -- # '[' -z 1124144 ']' 00:19:14.381 22:45:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.381 22:45:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:14.381 22:45:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.381 22:45:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:14.381 22:45:58 -- common/autotest_common.sh@10 -- # set +x 00:19:14.381 [2024-04-15 22:45:58.890606] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:14.381 [2024-04-15 22:45:58.890669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.381 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.381 [2024-04-15 22:45:58.968952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.381 [2024-04-15 22:45:59.041241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:14.381 [2024-04-15 22:45:59.041378] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.381 [2024-04-15 22:45:59.041387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
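As in the nmic run, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket is usable before any rpc_cmd is issued. A simplified stand-in for that pattern, using only the paths and flags visible in this trace (the real helper also handles timeouts and PID bookkeeping):

# Simplified stand-in for nvmfappstart + waitforlisten as traced above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # crude wait for the RPC socket
echo "nvmf_tgt (pid $nvmfpid) ready for rpc.py"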
00:19:14.381 [2024-04-15 22:45:59.041399] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.381 [2024-04-15 22:45:59.041569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.381 [2024-04-15 22:45:59.041652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.381 [2024-04-15 22:45:59.041803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.381 [2024-04-15 22:45:59.041803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.953 22:45:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:14.953 22:45:59 -- common/autotest_common.sh@852 -- # return 0 00:19:14.953 22:45:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:14.953 22:45:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:14.953 22:45:59 -- common/autotest_common.sh@10 -- # set +x 00:19:14.953 22:45:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.953 22:45:59 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:15.214 [2024-04-15 22:45:59.852254] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.214 22:45:59 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.475 22:46:00 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:15.475 22:46:00 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.475 22:46:00 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:15.475 22:46:00 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.735 22:46:00 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:15.735 22:46:00 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.995 22:46:00 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:15.995 22:46:00 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:15.995 22:46:00 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.255 22:46:00 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:16.255 22:46:00 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.516 22:46:01 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:16.516 22:46:01 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.821 22:46:01 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:16.822 22:46:01 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:16.822 22:46:01 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:17.126 22:46:01 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:17.126 22:46:01 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.126 22:46:01 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:17.126 22:46:01 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:17.388 22:46:02 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.388 [2024-04-15 22:46:02.150680] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.388 22:46:02 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:17.648 22:46:02 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:17.910 22:46:02 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:19.298 22:46:04 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:19.298 22:46:04 -- common/autotest_common.sh@1177 -- # local i=0 00:19:19.298 22:46:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.298 22:46:04 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:19.298 22:46:04 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:19.298 22:46:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:21.844 22:46:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:21.844 22:46:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:21.844 22:46:06 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:21.844 22:46:06 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:21.844 22:46:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.844 22:46:06 -- common/autotest_common.sh@1187 -- # return 0 00:19:21.844 22:46:06 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:21.844 [global] 00:19:21.844 thread=1 00:19:21.844 invalidate=1 00:19:21.844 rw=write 00:19:21.844 time_based=1 00:19:21.844 runtime=1 00:19:21.844 ioengine=libaio 00:19:21.844 direct=1 00:19:21.844 bs=4096 00:19:21.844 iodepth=1 00:19:21.844 norandommap=0 00:19:21.844 numjobs=1 00:19:21.844 00:19:21.844 verify_dump=1 00:19:21.844 verify_backlog=512 00:19:21.844 verify_state_save=0 00:19:21.844 do_verify=1 00:19:21.844 verify=crc32c-intel 00:19:21.844 [job0] 00:19:21.844 filename=/dev/nvme0n1 00:19:21.844 [job1] 00:19:21.844 filename=/dev/nvme0n2 00:19:21.844 [job2] 00:19:21.844 filename=/dev/nvme0n3 00:19:21.844 [job3] 00:19:21.844 filename=/dev/nvme0n4 00:19:21.844 Could not set queue depth (nvme0n1) 00:19:21.844 Could not set queue depth (nvme0n2) 00:19:21.844 Could not set queue depth (nvme0n3) 00:19:21.844 Could not set queue depth (nvme0n4) 00:19:21.844 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.844 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.844 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
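Target-side provisioning in fio.sh is plain rpc.py driving: one TCP transport, seven 64 MB malloc bdevs (two exported directly, two combined into a RAID-0 and three into a concat array), a single subsystem with four namespaces, and a listener on the namespace IP; the initiator then connects with nvme-cli and polls until all four namespaces show up. Condensed into one sketch with the paths, NQN, serial and host identity used in this run (the retry limit of the real waitforserial helper is omitted):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# Transport and backing bdevs (flags as invoked in this run)
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done      # -> Malloc0..Malloc6
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

# One subsystem, four namespaces, one TCP listener
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then wait for the four namespaces to appear
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 4 )); do
    sleep 2
done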
00:19:21.844 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.844 fio-3.35 00:19:21.844 Starting 4 threads 00:19:23.229 00:19:23.229 job0: (groupid=0, jobs=1): err= 0: pid=1125782: Mon Apr 15 22:46:07 2024 00:19:23.229 read: IOPS=15, BW=62.9KiB/s (64.4kB/s)(64.0KiB/1018msec) 00:19:23.229 slat (nsec): min=24572, max=25052, avg=24712.44, stdev=119.62 00:19:23.229 clat (usec): min=1144, max=42138, avg=39404.18, stdev=10202.96 00:19:23.229 lat (usec): min=1169, max=42162, avg=39428.89, stdev=10202.95 00:19:23.229 clat percentiles (usec): 00:19:23.229 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41681], 20.00th=[41681], 00:19:23.229 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:19:23.229 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:23.229 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:23.229 | 99.99th=[42206] 00:19:23.229 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:19:23.229 slat (nsec): min=9397, max=68290, avg=28761.01, stdev=9406.95 00:19:23.229 clat (usec): min=333, max=1346, avg=720.21, stdev=117.75 00:19:23.229 lat (usec): min=343, max=1378, avg=748.97, stdev=121.01 00:19:23.229 clat percentiles (usec): 00:19:23.229 | 1.00th=[ 383], 5.00th=[ 515], 10.00th=[ 570], 20.00th=[ 627], 00:19:23.229 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 742], 00:19:23.229 | 70.00th=[ 775], 80.00th=[ 816], 90.00th=[ 865], 95.00th=[ 889], 00:19:23.229 | 99.00th=[ 955], 99.50th=[ 996], 99.90th=[ 1352], 99.95th=[ 1352], 00:19:23.229 | 99.99th=[ 1352] 00:19:23.229 bw ( KiB/s): min= 4096, max= 4096, per=50.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.229 lat (usec) : 500=3.98%, 750=55.30%, 1000=37.31% 00:19:23.229 lat (msec) : 2=0.57%, 50=2.84% 00:19:23.229 cpu : usr=0.69%, sys=1.38%, ctx=528, majf=0, minf=1 00:19:23.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.229 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.229 job1: (groupid=0, jobs=1): err= 0: pid=1125791: Mon Apr 15 22:46:07 2024 00:19:23.229 read: IOPS=17, BW=70.4KiB/s (72.1kB/s)(72.0KiB/1022msec) 00:19:23.229 slat (nsec): min=10022, max=26759, avg=25215.83, stdev=3798.95 00:19:23.229 clat (usec): min=40899, max=42021, avg=41618.46, stdev=478.11 00:19:23.229 lat (usec): min=40925, max=42047, avg=41643.67, stdev=477.90 00:19:23.229 clat percentiles (usec): 00:19:23.229 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:23.229 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:19:23.229 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:23.229 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:23.229 | 99.99th=[42206] 00:19:23.229 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:19:23.229 slat (nsec): min=9627, max=66893, avg=27225.83, stdev=11823.05 00:19:23.229 clat (usec): min=96, max=2785, avg=496.01, stdev=150.79 00:19:23.229 lat (usec): min=107, max=2795, avg=523.23, stdev=154.28 00:19:23.229 clat percentiles (usec): 00:19:23.229 | 1.00th=[ 194], 
5.00th=[ 273], 10.00th=[ 318], 20.00th=[ 416], 00:19:23.229 | 30.00th=[ 445], 40.00th=[ 482], 50.00th=[ 523], 60.00th=[ 545], 00:19:23.229 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 627], 00:19:23.229 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 2802], 99.95th=[ 2802], 00:19:23.229 | 99.99th=[ 2802] 00:19:23.229 bw ( KiB/s): min= 4096, max= 4096, per=50.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.229 lat (usec) : 100=0.19%, 250=3.58%, 500=38.49%, 750=53.96%, 1000=0.19% 00:19:23.229 lat (msec) : 4=0.19%, 50=3.40% 00:19:23.229 cpu : usr=0.49%, sys=1.57%, ctx=533, majf=0, minf=1 00:19:23.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.229 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.229 job2: (groupid=0, jobs=1): err= 0: pid=1125804: Mon Apr 15 22:46:07 2024 00:19:23.229 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1015msec) 00:19:23.229 slat (nsec): min=9547, max=27039, avg=25439.12, stdev=4249.30 00:19:23.229 clat (usec): min=1174, max=42049, avg=39355.56, stdev=10185.10 00:19:23.229 lat (usec): min=1183, max=42076, avg=39381.00, stdev=10189.33 00:19:23.229 clat percentiles (usec): 00:19:23.229 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41681], 00:19:23.229 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:23.229 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:23.229 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:23.229 | 99.99th=[42206] 00:19:23.229 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:19:23.229 slat (nsec): min=9187, max=62051, avg=30646.76, stdev=10170.13 00:19:23.229 clat (usec): min=429, max=990, avg=711.54, stdev=102.24 00:19:23.229 lat (usec): min=439, max=1025, avg=742.18, stdev=106.24 00:19:23.229 clat percentiles (usec): 00:19:23.229 | 1.00th=[ 457], 5.00th=[ 545], 10.00th=[ 570], 20.00th=[ 627], 00:19:23.229 | 30.00th=[ 660], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 742], 00:19:23.229 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 873], 00:19:23.229 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 988], 00:19:23.229 | 99.99th=[ 988] 00:19:23.229 bw ( KiB/s): min= 4096, max= 4096, per=50.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.229 lat (usec) : 500=2.65%, 750=58.52%, 1000=35.80% 00:19:23.229 lat (msec) : 2=0.19%, 50=2.84% 00:19:23.229 cpu : usr=1.28%, sys=1.68%, ctx=530, majf=0, minf=1 00:19:23.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.229 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.229 job3: (groupid=0, jobs=1): err= 0: pid=1125810: Mon Apr 15 22:46:07 2024 00:19:23.229 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:23.229 slat (nsec): min=7508, max=58237, avg=25808.63, stdev=3584.64 00:19:23.229 clat (usec): 
min=822, max=1284, avg=1054.15, stdev=60.61 00:19:23.229 lat (usec): min=848, max=1309, avg=1079.96, stdev=60.65 00:19:23.229 clat percentiles (usec): 00:19:23.229 | 1.00th=[ 889], 5.00th=[ 947], 10.00th=[ 971], 20.00th=[ 1012], 00:19:23.229 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1057], 60.00th=[ 1074], 00:19:23.229 | 70.00th=[ 1090], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1139], 00:19:23.230 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1287], 99.95th=[ 1287], 00:19:23.230 | 99.99th=[ 1287] 00:19:23.230 write: IOPS=540, BW=2162KiB/s (2214kB/s)(2164KiB/1001msec); 0 zone resets 00:19:23.230 slat (usec): min=9, max=36667, avg=109.06, stdev=1583.65 00:19:23.230 clat (usec): min=336, max=1035, avg=702.00, stdev=110.04 00:19:23.230 lat (usec): min=368, max=37505, avg=811.06, stdev=1593.56 00:19:23.230 clat percentiles (usec): 00:19:23.230 | 1.00th=[ 429], 5.00th=[ 498], 10.00th=[ 562], 20.00th=[ 611], 00:19:23.230 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 734], 00:19:23.230 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 840], 95.00th=[ 865], 00:19:23.230 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 1037], 99.95th=[ 1037], 00:19:23.230 | 99.99th=[ 1037] 00:19:23.230 bw ( KiB/s): min= 4096, max= 4096, per=50.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.230 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.230 lat (usec) : 500=2.66%, 750=31.34%, 1000=25.83% 00:19:23.230 lat (msec) : 2=40.17% 00:19:23.230 cpu : usr=1.30%, sys=3.50%, ctx=1059, majf=0, minf=2 00:19:23.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.230 issued rwts: total=512,541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.230 00:19:23.230 Run status group 0 (all jobs): 00:19:23.230 READ: bw=2200KiB/s (2252kB/s), 62.9KiB/s-2046KiB/s (64.4kB/s-2095kB/s), io=2248KiB (2302kB), run=1001-1022msec 00:19:23.230 WRITE: bw=8129KiB/s (8324kB/s), 2004KiB/s-2162KiB/s (2052kB/s-2214kB/s), io=8308KiB (8507kB), run=1001-1022msec 00:19:23.230 00:19:23.230 Disk stats (read/write): 00:19:23.230 nvme0n1: ios=61/512, merge=0/0, ticks=463/346, in_queue=809, util=86.07% 00:19:23.230 nvme0n2: ios=35/512, merge=0/0, ticks=1423/250, in_queue=1673, util=87.84% 00:19:23.230 nvme0n3: ios=33/512, merge=0/0, ticks=1301/309, in_queue=1610, util=91.85% 00:19:23.230 nvme0n4: ios=431/512, merge=0/0, ticks=805/341, in_queue=1146, util=97.22% 00:19:23.230 22:46:07 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:23.230 [global] 00:19:23.230 thread=1 00:19:23.230 invalidate=1 00:19:23.230 rw=randwrite 00:19:23.230 time_based=1 00:19:23.230 runtime=1 00:19:23.230 ioengine=libaio 00:19:23.230 direct=1 00:19:23.230 bs=4096 00:19:23.230 iodepth=1 00:19:23.230 norandommap=0 00:19:23.230 numjobs=1 00:19:23.230 00:19:23.230 verify_dump=1 00:19:23.230 verify_backlog=512 00:19:23.230 verify_state_save=0 00:19:23.230 do_verify=1 00:19:23.230 verify=crc32c-intel 00:19:23.230 [job0] 00:19:23.230 filename=/dev/nvme0n1 00:19:23.230 [job1] 00:19:23.230 filename=/dev/nvme0n2 00:19:23.230 [job2] 00:19:23.230 filename=/dev/nvme0n3 00:19:23.230 [job3] 00:19:23.230 filename=/dev/nvme0n4 00:19:23.230 Could not set queue depth (nvme0n1) 00:19:23.230 Could not set queue 
depth (nvme0n2) 00:19:23.230 Could not set queue depth (nvme0n3) 00:19:23.230 Could not set queue depth (nvme0n4) 00:19:23.503 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.503 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.503 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.503 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.503 fio-3.35 00:19:23.503 Starting 4 threads 00:19:24.893 00:19:24.893 job0: (groupid=0, jobs=1): err= 0: pid=1126290: Mon Apr 15 22:46:09 2024 00:19:24.893 read: IOPS=526, BW=2106KiB/s (2156kB/s)(2108KiB/1001msec) 00:19:24.893 slat (nsec): min=6438, max=60957, avg=24679.72, stdev=6019.92 00:19:24.893 clat (usec): min=463, max=1097, avg=810.45, stdev=93.12 00:19:24.893 lat (usec): min=488, max=1122, avg=835.13, stdev=94.25 00:19:24.893 clat percentiles (usec): 00:19:24.893 | 1.00th=[ 578], 5.00th=[ 644], 10.00th=[ 693], 20.00th=[ 734], 00:19:24.893 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 848], 00:19:24.893 | 70.00th=[ 865], 80.00th=[ 881], 90.00th=[ 922], 95.00th=[ 947], 00:19:24.893 | 99.00th=[ 1004], 99.50th=[ 1057], 99.90th=[ 1090], 99.95th=[ 1090], 00:19:24.893 | 99.99th=[ 1090] 00:19:24.893 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:24.893 slat (nsec): min=8421, max=65937, avg=28883.72, stdev=8547.36 00:19:24.893 clat (usec): min=121, max=861, avg=506.65, stdev=141.76 00:19:24.893 lat (usec): min=134, max=892, avg=535.54, stdev=145.15 00:19:24.893 clat percentiles (usec): 00:19:24.893 | 1.00th=[ 141], 5.00th=[ 237], 10.00th=[ 306], 20.00th=[ 392], 00:19:24.893 | 30.00th=[ 449], 40.00th=[ 486], 50.00th=[ 519], 60.00th=[ 553], 00:19:24.893 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 676], 95.00th=[ 717], 00:19:24.893 | 99.00th=[ 791], 99.50th=[ 807], 99.90th=[ 857], 99.95th=[ 865], 00:19:24.893 | 99.99th=[ 865] 00:19:24.893 bw ( KiB/s): min= 4096, max= 4096, per=39.81%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.893 lat (usec) : 250=4.38%, 500=25.21%, 750=43.33%, 1000=26.69% 00:19:24.893 lat (msec) : 2=0.39% 00:19:24.893 cpu : usr=3.30%, sys=5.50%, ctx=1551, majf=0, minf=1 00:19:24.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.893 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.893 job1: (groupid=0, jobs=1): err= 0: pid=1126298: Mon Apr 15 22:46:09 2024 00:19:24.893 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:24.893 slat (nsec): min=8018, max=43490, avg=25103.13, stdev=2741.77 00:19:24.893 clat (usec): min=846, max=1266, avg=1102.52, stdev=70.93 00:19:24.893 lat (usec): min=871, max=1290, avg=1127.62, stdev=70.83 00:19:24.893 clat percentiles (usec): 00:19:24.893 | 1.00th=[ 865], 5.00th=[ 963], 10.00th=[ 1012], 20.00th=[ 1057], 00:19:24.893 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1106], 60.00th=[ 1123], 00:19:24.893 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1205], 00:19:24.893 | 99.00th=[ 1221], 99.50th=[ 
1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:19:24.893 | 99.99th=[ 1270] 00:19:24.893 write: IOPS=572, BW=2290KiB/s (2345kB/s)(2292KiB/1001msec); 0 zone resets 00:19:24.893 slat (nsec): min=8876, max=50212, avg=27107.53, stdev=9028.09 00:19:24.893 clat (usec): min=347, max=1843, avg=695.89, stdev=121.60 00:19:24.893 lat (usec): min=377, max=1875, avg=723.00, stdev=125.31 00:19:24.893 clat percentiles (usec): 00:19:24.893 | 1.00th=[ 400], 5.00th=[ 469], 10.00th=[ 545], 20.00th=[ 603], 00:19:24.893 | 30.00th=[ 660], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 734], 00:19:24.893 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 857], 00:19:24.893 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1844], 99.95th=[ 1844], 00:19:24.893 | 99.99th=[ 1844] 00:19:24.893 bw ( KiB/s): min= 4096, max= 4096, per=39.81%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.893 lat (usec) : 500=4.06%, 750=30.51%, 1000=22.12% 00:19:24.893 lat (msec) : 2=43.32% 00:19:24.893 cpu : usr=0.90%, sys=3.70%, ctx=1085, majf=0, minf=1 00:19:24.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.893 issued rwts: total=512,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.893 job2: (groupid=0, jobs=1): err= 0: pid=1126311: Mon Apr 15 22:46:09 2024 00:19:24.893 read: IOPS=15, BW=62.8KiB/s (64.3kB/s)(64.0KiB/1019msec) 00:19:24.893 slat (nsec): min=7244, max=26707, avg=24027.62, stdev=5969.23 00:19:24.893 clat (usec): min=1005, max=42033, avg=39385.39, stdev=10234.96 00:19:24.893 lat (usec): min=1015, max=42059, avg=39409.42, stdev=10238.63 00:19:24.893 clat percentiles (usec): 00:19:24.893 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[41681], 20.00th=[41681], 00:19:24.893 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:24.893 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:24.893 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:24.893 | 99.99th=[42206] 00:19:24.893 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:19:24.893 slat (nsec): min=8728, max=62322, avg=28936.36, stdev=9525.31 00:19:24.893 clat (usec): min=328, max=1063, avg=722.13, stdev=116.75 00:19:24.893 lat (usec): min=339, max=1095, avg=751.06, stdev=121.36 00:19:24.893 clat percentiles (usec): 00:19:24.893 | 1.00th=[ 453], 5.00th=[ 515], 10.00th=[ 570], 20.00th=[ 635], 00:19:24.893 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 758], 00:19:24.893 | 70.00th=[ 791], 80.00th=[ 824], 90.00th=[ 873], 95.00th=[ 906], 00:19:24.893 | 99.00th=[ 947], 99.50th=[ 1004], 99.90th=[ 1057], 99.95th=[ 1057], 00:19:24.893 | 99.99th=[ 1057] 00:19:24.893 bw ( KiB/s): min= 4096, max= 4096, per=39.81%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.893 lat (usec) : 500=4.55%, 750=52.27%, 1000=39.58% 00:19:24.893 lat (msec) : 2=0.76%, 50=2.84% 00:19:24.893 cpu : usr=1.47%, sys=1.38%, ctx=528, majf=0, minf=2 00:19:24.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:24.893 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.893 job3: (groupid=0, jobs=1): err= 0: pid=1126316: Mon Apr 15 22:46:09 2024 00:19:24.893 read: IOPS=16, BW=67.2KiB/s (68.8kB/s)(68.0KiB/1012msec) 00:19:24.893 slat (nsec): min=26096, max=26813, avg=26495.29, stdev=213.34 00:19:24.893 clat (usec): min=980, max=42068, avg=39471.07, stdev=9921.18 00:19:24.893 lat (usec): min=1007, max=42094, avg=39497.57, stdev=9921.14 00:19:24.893 clat percentiles (usec): 00:19:24.893 | 1.00th=[ 979], 5.00th=[ 979], 10.00th=[41157], 20.00th=[41681], 00:19:24.893 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:24.893 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:24.893 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:24.893 | 99.99th=[42206] 00:19:24.894 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:19:24.894 slat (nsec): min=9103, max=53398, avg=29832.59, stdev=9649.43 00:19:24.894 clat (usec): min=279, max=963, avg=627.01, stdev=136.96 00:19:24.894 lat (usec): min=289, max=997, avg=656.84, stdev=139.64 00:19:24.894 clat percentiles (usec): 00:19:24.894 | 1.00th=[ 338], 5.00th=[ 429], 10.00th=[ 453], 20.00th=[ 510], 00:19:24.894 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 668], 00:19:24.894 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 824], 95.00th=[ 848], 00:19:24.894 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 963], 00:19:24.894 | 99.99th=[ 963] 00:19:24.894 bw ( KiB/s): min= 4096, max= 4096, per=39.81%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.894 lat (usec) : 500=17.58%, 750=56.52%, 1000=22.87% 00:19:24.894 lat (msec) : 50=3.02% 00:19:24.894 cpu : usr=1.09%, sys=1.98%, ctx=531, majf=0, minf=1 00:19:24.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.894 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.894 00:19:24.894 Run status group 0 (all jobs): 00:19:24.894 READ: bw=4208KiB/s (4309kB/s), 62.8KiB/s-2106KiB/s (64.3kB/s-2156kB/s), io=4288KiB (4391kB), run=1001-1019msec 00:19:24.894 WRITE: bw=10.0MiB/s (10.5MB/s), 2010KiB/s-4092KiB/s (2058kB/s-4190kB/s), io=10.2MiB (10.7MB), run=1001-1019msec 00:19:24.894 00:19:24.894 Disk stats (read/write): 00:19:24.894 nvme0n1: ios=551/726, merge=0/0, ticks=550/281, in_queue=831, util=97.09% 00:19:24.894 nvme0n2: ios=454/512, merge=0/0, ticks=494/336, in_queue=830, util=89.38% 00:19:24.894 nvme0n3: ios=56/512, merge=0/0, ticks=471/294, in_queue=765, util=90.58% 00:19:24.894 nvme0n4: ios=50/512, merge=0/0, ticks=1466/269, in_queue=1735, util=99.04% 00:19:24.894 22:46:09 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:24.894 [global] 00:19:24.894 thread=1 00:19:24.894 invalidate=1 00:19:24.894 rw=write 00:19:24.894 time_based=1 00:19:24.894 runtime=1 00:19:24.894 ioengine=libaio 00:19:24.894 direct=1 00:19:24.894 bs=4096 00:19:24.894 iodepth=128 00:19:24.894 norandommap=0 00:19:24.894 numjobs=1 00:19:24.894 00:19:24.894 verify_dump=1 
00:19:24.894 verify_backlog=512 00:19:24.894 verify_state_save=0 00:19:24.894 do_verify=1 00:19:24.894 verify=crc32c-intel 00:19:24.894 [job0] 00:19:24.894 filename=/dev/nvme0n1 00:19:24.894 [job1] 00:19:24.894 filename=/dev/nvme0n2 00:19:24.894 [job2] 00:19:24.894 filename=/dev/nvme0n3 00:19:24.894 [job3] 00:19:24.894 filename=/dev/nvme0n4 00:19:24.894 Could not set queue depth (nvme0n1) 00:19:24.894 Could not set queue depth (nvme0n2) 00:19:24.894 Could not set queue depth (nvme0n3) 00:19:24.894 Could not set queue depth (nvme0n4) 00:19:25.166 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.166 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.166 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.166 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.166 fio-3.35 00:19:25.166 Starting 4 threads 00:19:26.608 00:19:26.608 job0: (groupid=0, jobs=1): err= 0: pid=1126815: Mon Apr 15 22:46:11 2024 00:19:26.608 read: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec) 00:19:26.608 slat (nsec): min=908, max=7150.7k, avg=60868.43, stdev=428037.35 00:19:26.608 clat (usec): min=1964, max=18171, avg=7950.45, stdev=2014.19 00:19:26.608 lat (usec): min=2417, max=18174, avg=8011.32, stdev=2030.70 00:19:26.608 clat percentiles (usec): 00:19:26.608 | 1.00th=[ 4178], 5.00th=[ 5145], 10.00th=[ 6063], 20.00th=[ 6652], 00:19:26.608 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7832], 00:19:26.608 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[10683], 95.00th=[11863], 00:19:26.608 | 99.00th=[14353], 99.50th=[16057], 99.90th=[17695], 99.95th=[18220], 00:19:26.608 | 99.99th=[18220] 00:19:26.608 write: IOPS=8345, BW=32.6MiB/s (34.2MB/s)(32.8MiB/1005msec); 0 zone resets 00:19:26.608 slat (nsec): min=1568, max=8317.1k, avg=55725.37, stdev=371128.06 00:19:26.608 clat (usec): min=1147, max=18168, avg=7434.22, stdev=2603.83 00:19:26.608 lat (usec): min=1159, max=18170, avg=7489.95, stdev=2605.75 00:19:26.609 clat percentiles (usec): 00:19:26.609 | 1.00th=[ 2737], 5.00th=[ 3818], 10.00th=[ 4490], 20.00th=[ 5997], 00:19:26.609 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7504], 00:19:26.609 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[10814], 95.00th=[13566], 00:19:26.609 | 99.00th=[16057], 99.50th=[16188], 99.90th=[17695], 99.95th=[17695], 00:19:26.609 | 99.99th=[18220] 00:19:26.609 bw ( KiB/s): min=32768, max=33376, per=31.17%, avg=33072.00, stdev=429.92, samples=2 00:19:26.609 iops : min= 8192, max= 8344, avg=8268.00, stdev=107.48, samples=2 00:19:26.609 lat (msec) : 2=0.13%, 4=3.54%, 10=83.60%, 20=12.73% 00:19:26.609 cpu : usr=5.08%, sys=7.27%, ctx=611, majf=0, minf=1 00:19:26.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:26.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.609 issued rwts: total=8192,8387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.609 job1: (groupid=0, jobs=1): err= 0: pid=1126816: Mon Apr 15 22:46:11 2024 00:19:26.609 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:19:26.609 slat (nsec): min=966, max=9662.4k, avg=88059.75, stdev=624958.29 00:19:26.609 clat (usec): 
min=4080, max=30609, avg=11707.43, stdev=3509.98 00:19:26.609 lat (usec): min=4085, max=30641, avg=11795.48, stdev=3534.22 00:19:26.609 clat percentiles (usec): 00:19:26.609 | 1.00th=[ 5669], 5.00th=[ 6980], 10.00th=[ 7832], 20.00th=[ 9241], 00:19:26.609 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11731], 00:19:26.609 | 70.00th=[12649], 80.00th=[14091], 90.00th=[16712], 95.00th=[17957], 00:19:26.609 | 99.00th=[22414], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:19:26.609 | 99.99th=[30540] 00:19:26.609 write: IOPS=6081, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec); 0 zone resets 00:19:26.609 slat (nsec): min=1683, max=10633k, avg=76764.76, stdev=509802.03 00:19:26.609 clat (usec): min=1180, max=54542, avg=10111.61, stdev=6108.42 00:19:26.609 lat (usec): min=1190, max=54548, avg=10188.37, stdev=6130.48 00:19:26.609 clat percentiles (usec): 00:19:26.609 | 1.00th=[ 3392], 5.00th=[ 4686], 10.00th=[ 5866], 20.00th=[ 7570], 00:19:26.609 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10290], 00:19:26.609 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11731], 95.00th=[14746], 00:19:26.609 | 99.00th=[47973], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:19:26.609 | 99.99th=[54789] 00:19:26.609 bw ( KiB/s): min=23856, max=24264, per=22.68%, avg=24060.00, stdev=288.50, samples=2 00:19:26.609 iops : min= 5964, max= 6066, avg=6015.00, stdev=72.12, samples=2 00:19:26.609 lat (msec) : 2=0.02%, 4=1.54%, 10=43.71%, 20=52.19%, 50=2.11% 00:19:26.609 lat (msec) : 100=0.43% 00:19:26.609 cpu : usr=5.05%, sys=6.14%, ctx=487, majf=0, minf=1 00:19:26.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:26.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.609 issued rwts: total=5632,6142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.609 job2: (groupid=0, jobs=1): err= 0: pid=1126828: Mon Apr 15 22:46:11 2024 00:19:26.609 read: IOPS=6845, BW=26.7MiB/s (28.0MB/s)(27.0MiB/1008msec) 00:19:26.609 slat (nsec): min=968, max=17133k, avg=72047.68, stdev=605893.83 00:19:26.609 clat (usec): min=2124, max=40325, avg=9794.56, stdev=3937.59 00:19:26.609 lat (usec): min=2142, max=43678, avg=9866.61, stdev=3980.15 00:19:26.609 clat percentiles (usec): 00:19:26.609 | 1.00th=[ 4178], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 7504], 00:19:26.609 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:19:26.609 | 70.00th=[10159], 80.00th=[11731], 90.00th=[13960], 95.00th=[18220], 00:19:26.609 | 99.00th=[24773], 99.50th=[25035], 99.90th=[40109], 99.95th=[40109], 00:19:26.609 | 99.99th=[40109] 00:19:26.609 write: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec); 0 zone resets 00:19:26.609 slat (nsec): min=1600, max=10016k, avg=53354.65, stdev=355201.18 00:19:26.609 clat (usec): min=1019, max=27073, avg=8416.16, stdev=2803.36 00:19:26.609 lat (usec): min=1052, max=27082, avg=8469.52, stdev=2805.63 00:19:26.609 clat percentiles (usec): 00:19:26.609 | 1.00th=[ 2802], 5.00th=[ 4228], 10.00th=[ 5211], 20.00th=[ 6718], 00:19:26.609 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8717], 00:19:26.609 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[10945], 95.00th=[12780], 00:19:26.609 | 99.00th=[19006], 99.50th=[23987], 99.90th=[27132], 99.95th=[27132], 00:19:26.609 | 99.99th=[27132] 00:19:26.609 bw ( KiB/s): min=27664, max=29680, per=27.02%, 
avg=28672.00, stdev=1425.53, samples=2 00:19:26.609 iops : min= 6916, max= 7420, avg=7168.00, stdev=356.38, samples=2 00:19:26.609 lat (msec) : 2=0.06%, 4=2.34%, 10=75.48%, 20=20.39%, 50=1.73% 00:19:26.609 cpu : usr=5.06%, sys=7.15%, ctx=596, majf=0, minf=1 00:19:26.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:26.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.609 issued rwts: total=6900,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.609 job3: (groupid=0, jobs=1): err= 0: pid=1126831: Mon Apr 15 22:46:11 2024 00:19:26.609 read: IOPS=4595, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1011msec) 00:19:26.609 slat (nsec): min=944, max=13516k, avg=93888.47, stdev=707997.86 00:19:26.609 clat (usec): min=2307, max=48474, avg=13044.69, stdev=5436.64 00:19:26.609 lat (usec): min=2329, max=48481, avg=13138.58, stdev=5463.87 00:19:26.609 clat percentiles (usec): 00:19:26.609 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 7439], 20.00th=[ 9765], 00:19:26.609 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11994], 60.00th=[12780], 00:19:26.609 | 70.00th=[14091], 80.00th=[16057], 90.00th=[19268], 95.00th=[21890], 00:19:26.609 | 99.00th=[31589], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:19:26.609 | 99.99th=[48497] 00:19:26.609 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec); 0 zone resets 00:19:26.609 slat (nsec): min=1608, max=13273k, avg=99001.47, stdev=735406.75 00:19:26.609 clat (usec): min=1099, max=75599, avg=13214.72, stdev=9380.41 00:19:26.609 lat (usec): min=1110, max=75607, avg=13313.72, stdev=9432.50 00:19:26.609 clat percentiles (usec): 00:19:26.609 | 1.00th=[ 3228], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 7898], 00:19:26.609 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[11469], 60.00th=[12256], 00:19:26.609 | 70.00th=[12780], 80.00th=[16319], 90.00th=[19792], 95.00th=[26870], 00:19:26.609 | 99.00th=[66847], 99.50th=[68682], 99.90th=[76022], 99.95th=[76022], 00:19:26.609 | 99.99th=[76022] 00:19:26.609 bw ( KiB/s): min=18032, max=22216, per=18.97%, avg=20124.00, stdev=2958.53, samples=2 00:19:26.609 iops : min= 4508, max= 5554, avg=5031.00, stdev=739.63, samples=2 00:19:26.609 lat (msec) : 2=0.30%, 4=0.74%, 10=32.30%, 20=58.52%, 50=7.27% 00:19:26.609 lat (msec) : 100=0.88% 00:19:26.609 cpu : usr=4.46%, sys=4.36%, ctx=342, majf=0, minf=1 00:19:26.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:26.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.609 issued rwts: total=4646,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.609 00:19:26.609 Run status group 0 (all jobs): 00:19:26.609 READ: bw=98.0MiB/s (103MB/s), 18.0MiB/s-31.8MiB/s (18.8MB/s-33.4MB/s), io=99.1MiB (104MB), run=1005-1011msec 00:19:26.609 WRITE: bw=104MiB/s (109MB/s), 19.8MiB/s-32.6MiB/s (20.7MB/s-34.2MB/s), io=105MiB (110MB), run=1005-1011msec 00:19:26.609 00:19:26.609 Disk stats (read/write): 00:19:26.609 nvme0n1: ios=6706/7023, merge=0/0, ticks=50840/50779, in_queue=101619, util=86.47% 00:19:26.609 nvme0n2: ios=4646/5085, merge=0/0, ticks=53045/49181, in_queue=102226, util=99.59% 00:19:26.609 nvme0n3: ios=5678/5903, merge=0/0, ticks=52830/46550, in_queue=99380, 
util=96.84% 00:19:26.609 nvme0n4: ios=3746/4096, merge=0/0, ticks=40453/43546, in_queue=83999, util=95.84% 00:19:26.609 22:46:11 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:26.609 [global] 00:19:26.609 thread=1 00:19:26.609 invalidate=1 00:19:26.609 rw=randwrite 00:19:26.609 time_based=1 00:19:26.609 runtime=1 00:19:26.609 ioengine=libaio 00:19:26.609 direct=1 00:19:26.609 bs=4096 00:19:26.609 iodepth=128 00:19:26.609 norandommap=0 00:19:26.609 numjobs=1 00:19:26.609 00:19:26.609 verify_dump=1 00:19:26.609 verify_backlog=512 00:19:26.609 verify_state_save=0 00:19:26.609 do_verify=1 00:19:26.609 verify=crc32c-intel 00:19:26.609 [job0] 00:19:26.609 filename=/dev/nvme0n1 00:19:26.609 [job1] 00:19:26.609 filename=/dev/nvme0n2 00:19:26.609 [job2] 00:19:26.609 filename=/dev/nvme0n3 00:19:26.609 [job3] 00:19:26.609 filename=/dev/nvme0n4 00:19:26.609 Could not set queue depth (nvme0n1) 00:19:26.609 Could not set queue depth (nvme0n2) 00:19:26.609 Could not set queue depth (nvme0n3) 00:19:26.609 Could not set queue depth (nvme0n4) 00:19:26.873 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.873 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.873 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.873 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.873 fio-3.35 00:19:26.873 Starting 4 threads 00:19:28.288 00:19:28.288 job0: (groupid=0, jobs=1): err= 0: pid=1127346: Mon Apr 15 22:46:12 2024 00:19:28.288 read: IOPS=8104, BW=31.7MiB/s (33.2MB/s)(31.7MiB/1002msec) 00:19:28.288 slat (nsec): min=889, max=10162k, avg=64099.06, stdev=421168.69 00:19:28.288 clat (usec): min=839, max=29436, avg=8082.38, stdev=2756.92 00:19:28.288 lat (usec): min=4166, max=29466, avg=8146.47, stdev=2792.65 00:19:28.288 clat percentiles (usec): 00:19:28.288 | 1.00th=[ 5342], 5.00th=[ 5932], 10.00th=[ 6194], 20.00th=[ 6652], 00:19:28.288 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7570], 00:19:28.288 | 70.00th=[ 7898], 80.00th=[ 8455], 90.00th=[10945], 95.00th=[13042], 00:19:28.288 | 99.00th=[21365], 99.50th=[23462], 99.90th=[24511], 99.95th=[24511], 00:19:28.288 | 99.99th=[29492] 00:19:28.288 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec); 0 zone resets 00:19:28.288 slat (nsec): min=1527, max=4057.9k, avg=54830.41, stdev=331542.07 00:19:28.288 clat (usec): min=3197, max=24430, avg=7471.95, stdev=1668.38 00:19:28.288 lat (usec): min=3199, max=24433, avg=7526.78, stdev=1693.29 00:19:28.288 clat percentiles (usec): 00:19:28.288 | 1.00th=[ 4359], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6521], 00:19:28.288 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7570], 00:19:28.288 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 9896], 00:19:28.288 | 99.00th=[15139], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:19:28.288 | 99.99th=[24511] 00:19:28.288 bw ( KiB/s): min=28672, max=36864, per=33.61%, avg=32768.00, stdev=5792.62, samples=2 00:19:28.288 iops : min= 7168, max= 9216, avg=8192.00, stdev=1448.15, samples=2 00:19:28.288 lat (usec) : 1000=0.01% 00:19:28.288 lat (msec) : 4=0.23%, 10=90.52%, 20=8.45%, 50=0.80% 00:19:28.289 cpu : usr=5.39%, sys=5.19%, ctx=713, majf=0, minf=1 00:19:28.289 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:28.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.289 issued rwts: total=8121,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.289 job1: (groupid=0, jobs=1): err= 0: pid=1127347: Mon Apr 15 22:46:12 2024 00:19:28.289 read: IOPS=6436, BW=25.1MiB/s (26.4MB/s)(25.2MiB/1002msec) 00:19:28.289 slat (nsec): min=961, max=8794.5k, avg=78180.39, stdev=573250.55 00:19:28.289 clat (usec): min=928, max=21722, avg=10437.51, stdev=3165.20 00:19:28.289 lat (usec): min=2626, max=21752, avg=10515.69, stdev=3186.34 00:19:28.289 clat percentiles (usec): 00:19:28.289 | 1.00th=[ 3359], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 8029], 00:19:28.289 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10683], 00:19:28.289 | 70.00th=[11731], 80.00th=[13042], 90.00th=[15139], 95.00th=[16057], 00:19:28.289 | 99.00th=[19530], 99.50th=[20579], 99.90th=[21365], 99.95th=[21365], 00:19:28.289 | 99.99th=[21627] 00:19:28.289 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:19:28.289 slat (nsec): min=1560, max=10500k, avg=66673.32, stdev=518930.33 00:19:28.289 clat (usec): min=947, max=20177, avg=8987.50, stdev=2966.13 00:19:28.289 lat (usec): min=979, max=20184, avg=9054.17, stdev=2983.00 00:19:28.289 clat percentiles (usec): 00:19:28.289 | 1.00th=[ 2737], 5.00th=[ 4621], 10.00th=[ 5800], 20.00th=[ 6718], 00:19:28.289 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9241], 00:19:28.289 | 70.00th=[ 9634], 80.00th=[10945], 90.00th=[13698], 95.00th=[14746], 00:19:28.289 | 99.00th=[16909], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:19:28.289 | 99.99th=[20055] 00:19:28.289 bw ( KiB/s): min=26592, max=26656, per=27.31%, avg=26624.00, stdev=45.25, samples=2 00:19:28.289 iops : min= 6648, max= 6664, avg=6656.00, stdev=11.31, samples=2 00:19:28.289 lat (usec) : 1000=0.03% 00:19:28.289 lat (msec) : 2=0.03%, 4=2.05%, 10=61.44%, 20=35.74%, 50=0.70% 00:19:28.289 cpu : usr=5.09%, sys=6.89%, ctx=377, majf=0, minf=1 00:19:28.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:28.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.289 issued rwts: total=6449,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.289 job2: (groupid=0, jobs=1): err= 0: pid=1127348: Mon Apr 15 22:46:12 2024 00:19:28.289 read: IOPS=5315, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1003msec) 00:19:28.289 slat (nsec): min=947, max=15166k, avg=95348.11, stdev=731377.81 00:19:28.289 clat (usec): min=1322, max=38789, avg=12432.43, stdev=5443.64 00:19:28.289 lat (usec): min=2551, max=38795, avg=12527.77, stdev=5490.60 00:19:28.289 clat percentiles (usec): 00:19:28.289 | 1.00th=[ 4228], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7439], 00:19:28.289 | 30.00th=[ 8094], 40.00th=[10290], 50.00th=[11731], 60.00th=[12911], 00:19:28.289 | 70.00th=[14877], 80.00th=[16581], 90.00th=[19268], 95.00th=[20841], 00:19:28.289 | 99.00th=[29754], 99.50th=[33424], 99.90th=[38536], 99.95th=[38536], 00:19:28.289 | 99.99th=[38536] 00:19:28.289 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:19:28.289 slat (nsec): min=1585, 
max=45992k, avg=75296.83, stdev=808710.30 00:19:28.289 clat (usec): min=711, max=54034, avg=10734.10, stdev=7241.30 00:19:28.289 lat (usec): min=720, max=62223, avg=10809.40, stdev=7280.83 00:19:28.289 clat percentiles (usec): 00:19:28.289 | 1.00th=[ 2900], 5.00th=[ 4490], 10.00th=[ 5669], 20.00th=[ 7046], 00:19:28.289 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[10945], 00:19:28.289 | 70.00th=[11863], 80.00th=[12780], 90.00th=[15533], 95.00th=[19006], 00:19:28.289 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:19:28.289 | 99.99th=[54264] 00:19:28.289 bw ( KiB/s): min=20480, max=24576, per=23.11%, avg=22528.00, stdev=2896.31, samples=2 00:19:28.289 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:19:28.289 lat (usec) : 750=0.03% 00:19:28.289 lat (msec) : 2=0.17%, 4=1.90%, 10=43.73%, 20=48.44%, 50=5.14% 00:19:28.289 lat (msec) : 100=0.60% 00:19:28.289 cpu : usr=3.89%, sys=5.19%, ctx=418, majf=0, minf=1 00:19:28.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:28.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.289 issued rwts: total=5331,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.289 job3: (groupid=0, jobs=1): err= 0: pid=1127349: Mon Apr 15 22:46:12 2024 00:19:28.289 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:19:28.289 slat (nsec): min=1325, max=18069k, avg=87930.64, stdev=807198.40 00:19:28.289 clat (usec): min=2060, max=33001, avg=13921.40, stdev=5880.76 00:19:28.289 lat (usec): min=2064, max=33005, avg=14009.33, stdev=5913.41 00:19:28.289 clat percentiles (usec): 00:19:28.289 | 1.00th=[ 3982], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7963], 00:19:28.289 | 30.00th=[ 9896], 40.00th=[11600], 50.00th=[13042], 60.00th=[15401], 00:19:28.289 | 70.00th=[17957], 80.00th=[19268], 90.00th=[21365], 95.00th=[24249], 00:19:28.289 | 99.00th=[26870], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:19:28.289 | 99.99th=[32900] 00:19:28.289 write: IOPS=4015, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1006msec); 0 zone resets 00:19:28.289 slat (nsec): min=1517, max=10937k, avg=132660.30, stdev=789707.48 00:19:28.289 clat (usec): min=782, max=92845, avg=18900.94, stdev=21014.73 00:19:28.289 lat (usec): min=792, max=92852, avg=19033.60, stdev=21157.55 00:19:28.289 clat percentiles (usec): 00:19:28.289 | 1.00th=[ 1565], 5.00th=[ 3785], 10.00th=[ 5014], 20.00th=[ 6915], 00:19:28.289 | 30.00th=[ 8094], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11600], 00:19:28.289 | 70.00th=[13042], 80.00th=[24249], 90.00th=[61080], 95.00th=[71828], 00:19:28.289 | 99.00th=[85459], 99.50th=[88605], 99.90th=[92799], 99.95th=[92799], 00:19:28.289 | 99.99th=[92799] 00:19:28.289 bw ( KiB/s): min= 8192, max=23104, per=16.05%, avg=15648.00, stdev=10544.38, samples=2 00:19:28.289 iops : min= 2048, max= 5776, avg=3912.00, stdev=2636.09, samples=2 00:19:28.289 lat (usec) : 1000=0.09% 00:19:28.289 lat (msec) : 2=0.63%, 4=3.32%, 10=36.07%, 20=41.36%, 50=11.82% 00:19:28.289 lat (msec) : 100=6.72% 00:19:28.289 cpu : usr=3.08%, sys=3.98%, ctx=376, majf=0, minf=1 00:19:28.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:28.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.289 
issued rwts: total=3584,4040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.289 00:19:28.289 Run status group 0 (all jobs): 00:19:28.289 READ: bw=91.2MiB/s (95.6MB/s), 13.9MiB/s-31.7MiB/s (14.6MB/s-33.2MB/s), io=91.7MiB (96.2MB), run=1002-1006msec 00:19:28.289 WRITE: bw=95.2MiB/s (99.8MB/s), 15.7MiB/s-31.9MiB/s (16.4MB/s-33.5MB/s), io=95.8MiB (100MB), run=1002-1006msec 00:19:28.289 00:19:28.289 Disk stats (read/write): 00:19:28.289 nvme0n1: ios=6738/7168, merge=0/0, ticks=27215/23837, in_queue=51052, util=100.00% 00:19:28.289 nvme0n2: ios=5295/5632, merge=0/0, ticks=51828/48159, in_queue=99987, util=89.91% 00:19:28.289 nvme0n3: ios=4661/4796, merge=0/0, ticks=51011/38876, in_queue=89887, util=95.79% 00:19:28.289 nvme0n4: ios=2617/3052, merge=0/0, ticks=36253/53550, in_queue=89803, util=96.80% 00:19:28.289 22:46:12 -- target/fio.sh@55 -- # sync 00:19:28.289 22:46:12 -- target/fio.sh@59 -- # fio_pid=1127685 00:19:28.289 22:46:12 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:28.289 22:46:12 -- target/fio.sh@61 -- # sleep 3 00:19:28.289 [global] 00:19:28.289 thread=1 00:19:28.289 invalidate=1 00:19:28.289 rw=read 00:19:28.289 time_based=1 00:19:28.289 runtime=10 00:19:28.289 ioengine=libaio 00:19:28.289 direct=1 00:19:28.289 bs=4096 00:19:28.289 iodepth=1 00:19:28.289 norandommap=1 00:19:28.289 numjobs=1 00:19:28.289 00:19:28.289 [job0] 00:19:28.289 filename=/dev/nvme0n1 00:19:28.289 [job1] 00:19:28.289 filename=/dev/nvme0n2 00:19:28.289 [job2] 00:19:28.289 filename=/dev/nvme0n3 00:19:28.289 [job3] 00:19:28.289 filename=/dev/nvme0n4 00:19:28.289 Could not set queue depth (nvme0n1) 00:19:28.289 Could not set queue depth (nvme0n2) 00:19:28.289 Could not set queue depth (nvme0n3) 00:19:28.289 Could not set queue depth (nvme0n4) 00:19:28.559 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.559 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.559 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.559 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.559 fio-3.35 00:19:28.559 Starting 4 threads 00:19:31.097 22:46:15 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:31.098 22:46:15 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:31.359 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=253952, buflen=4096 00:19:31.359 fio: pid=1127883, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:31.359 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=266240, buflen=4096 00:19:31.359 fio: pid=1127882, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:31.359 22:46:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.359 22:46:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:31.619 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1585152, buflen=4096 00:19:31.619 fio: pid=1127879, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 
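These Remote I/O errors are the point of the final pass: fio.sh has started a 10-second read job in the background (fio_pid above), then deletes the RAID arrays and malloc bdevs out from under it over RPC, so each /dev/nvme0nX eventually fails with err=121 and the job exits non-zero, which the script treats as the expected result. In outline, assuming the same rpc.py path and bdev names (the real script interleaves the deletions with the remaining fio output, as seen here):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# With the background read job still running against cnode1, remove its backing devices
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$bdev"
done

# The job now fails with Remote I/O error on every file; a non-zero exit is success here
if ! wait "$fio_pid"; then
    echo 'nvmf hotplug test: fio failed as expected'
fi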
00:19:31.619 22:46:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.619 22:46:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:31.619 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=12673024, buflen=4096 00:19:31.619 fio: pid=1127880, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:31.619 22:46:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.619 22:46:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:31.619 00:19:31.619 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1127879: Mon Apr 15 22:46:16 2024 00:19:31.619 read: IOPS=134, BW=538KiB/s (551kB/s)(1548KiB/2877msec) 00:19:31.619 slat (usec): min=7, max=29574, avg=174.66, stdev=1719.91 00:19:31.619 clat (usec): min=710, max=42910, avg=7196.52, stdev=14456.87 00:19:31.619 lat (usec): min=750, max=42934, avg=7371.56, stdev=14495.64 00:19:31.619 clat percentiles (usec): 00:19:31.619 | 1.00th=[ 963], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1156], 00:19:31.619 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1237], 00:19:31.619 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[42206], 95.00th=[42206], 00:19:31.619 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:31.619 | 99.99th=[42730] 00:19:31.619 bw ( KiB/s): min= 96, max= 1072, per=7.15%, avg=339.20, stdev=422.62, samples=5 00:19:31.619 iops : min= 24, max= 268, avg=84.80, stdev=105.66, samples=5 00:19:31.619 lat (usec) : 750=0.26%, 1000=1.29% 00:19:31.619 lat (msec) : 2=83.51%, 50=14.69% 00:19:31.619 cpu : usr=0.14%, sys=0.42%, ctx=393, majf=0, minf=1 00:19:31.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.619 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.619 issued rwts: total=388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.619 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1127880: Mon Apr 15 22:46:16 2024 00:19:31.619 read: IOPS=1016, BW=4064KiB/s (4162kB/s)(12.1MiB/3045msec) 00:19:31.619 slat (usec): min=6, max=23830, avg=44.62, stdev=597.62 00:19:31.619 clat (usec): min=408, max=41984, avg=926.13, stdev=1280.67 00:19:31.619 lat (usec): min=415, max=42008, avg=970.76, stdev=1417.23 00:19:31.619 clat percentiles (usec): 00:19:31.619 | 1.00th=[ 594], 5.00th=[ 644], 10.00th=[ 676], 20.00th=[ 709], 00:19:31.619 | 30.00th=[ 791], 40.00th=[ 840], 50.00th=[ 914], 60.00th=[ 963], 00:19:31.619 | 70.00th=[ 1012], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:19:31.619 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[41681], 00:19:31.619 | 99.99th=[42206] 00:19:31.619 bw ( KiB/s): min= 4336, max= 4384, per=91.84%, avg=4353.60, stdev=19.92, samples=5 00:19:31.619 iops : min= 1084, max= 1096, avg=1088.40, stdev= 4.98, samples=5 00:19:31.619 lat (usec) : 500=0.19%, 750=24.98%, 1000=43.26% 00:19:31.619 lat (msec) : 2=31.44%, 50=0.10% 00:19:31.619 cpu : usr=1.12%, sys=2.89%, ctx=3102, majf=0, minf=1 00:19:31.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:19:31.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.619 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.619 issued rwts: total=3095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.619 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1127882: Mon Apr 15 22:46:16 2024 00:19:31.619 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(260KiB/2710msec) 00:19:31.619 slat (nsec): min=23794, max=33960, avg=24275.79, stdev=1230.79 00:19:31.619 clat (usec): min=1083, max=42121, avg=41336.29, stdev=5071.13 00:19:31.619 lat (usec): min=1117, max=42145, avg=41360.56, stdev=5069.91 00:19:31.619 clat percentiles (usec): 00:19:31.619 | 1.00th=[ 1090], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:19:31.619 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:31.619 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:31.619 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:31.619 | 99.99th=[42206] 00:19:31.619 bw ( KiB/s): min= 96, max= 96, per=2.03%, avg=96.00, stdev= 0.00, samples=5 00:19:31.619 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:19:31.619 lat (msec) : 2=1.52%, 50=96.97% 00:19:31.619 cpu : usr=0.11%, sys=0.00%, ctx=66, majf=0, minf=1 00:19:31.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.619 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.619 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.619 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1127883: Mon Apr 15 22:46:16 2024 00:19:31.619 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(248KiB/2581msec) 00:19:31.619 slat (nsec): min=25881, max=39332, avg=26964.57, stdev=1982.15 00:19:31.619 clat (usec): min=827, max=45025, avg=41348.30, stdev=5247.88 00:19:31.619 lat (usec): min=866, max=45056, avg=41375.27, stdev=5246.34 00:19:31.619 clat percentiles (usec): 00:19:31.619 | 1.00th=[ 832], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:19:31.619 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:31.619 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:31.619 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:19:31.619 | 99.99th=[44827] 00:19:31.619 bw ( KiB/s): min= 96, max= 96, per=2.03%, avg=96.00, stdev= 0.00, samples=5 00:19:31.619 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:19:31.619 lat (usec) : 1000=1.59% 00:19:31.619 lat (msec) : 50=96.83% 00:19:31.619 cpu : usr=0.16%, sys=0.00%, ctx=63, majf=0, minf=2 00:19:31.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.619 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.619 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.619 00:19:31.619 Run status group 0 (all jobs): 00:19:31.619 READ: bw=4740KiB/s (4853kB/s), 95.9KiB/s-4064KiB/s (98.2kB/s-4162kB/s), io=14.1MiB (14.8MB), run=2581-3045msec 00:19:31.619 
00:19:31.619 Disk stats (read/write): 00:19:31.619 nvme0n1: ios=371/0, merge=0/0, ticks=2755/0, in_queue=2755, util=92.89% 00:19:31.619 nvme0n2: ios=3069/0, merge=0/0, ticks=2580/0, in_queue=2580, util=94.59% 00:19:31.619 nvme0n3: ios=62/0, merge=0/0, ticks=2563/0, in_queue=2563, util=96.03% 00:19:31.619 nvme0n4: ios=56/0, merge=0/0, ticks=2314/0, in_queue=2314, util=96.06% 00:19:31.878 22:46:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.878 22:46:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:32.136 22:46:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.136 22:46:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:32.136 22:46:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.136 22:46:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:32.394 22:46:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.394 22:46:17 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:32.653 22:46:17 -- target/fio.sh@69 -- # fio_status=0 00:19:32.653 22:46:17 -- target/fio.sh@70 -- # wait 1127685 00:19:32.653 22:46:17 -- target/fio.sh@70 -- # fio_status=4 00:19:32.653 22:46:17 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:32.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.653 22:46:17 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:32.653 22:46:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:32.653 22:46:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:32.653 22:46:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.653 22:46:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:32.653 22:46:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.653 22:46:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:32.653 22:46:17 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:32.653 22:46:17 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:32.653 nvmf hotplug test: fio failed as expected 00:19:32.653 22:46:17 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.912 22:46:17 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:32.912 22:46:17 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:32.912 22:46:17 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:32.912 22:46:17 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:32.912 22:46:17 -- target/fio.sh@91 -- # nvmftestfini 00:19:32.912 22:46:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:32.912 22:46:17 -- nvmf/common.sh@116 -- # sync 00:19:32.912 22:46:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:32.912 22:46:17 -- nvmf/common.sh@119 -- # set +e 00:19:32.912 22:46:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:32.912 22:46:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:32.912 rmmod nvme_tcp 00:19:32.912 rmmod nvme_fabrics 00:19:32.912 rmmod 
nvme_keyring 00:19:32.912 22:46:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:32.912 22:46:17 -- nvmf/common.sh@123 -- # set -e 00:19:32.912 22:46:17 -- nvmf/common.sh@124 -- # return 0 00:19:32.912 22:46:17 -- nvmf/common.sh@477 -- # '[' -n 1124144 ']' 00:19:32.912 22:46:17 -- nvmf/common.sh@478 -- # killprocess 1124144 00:19:32.912 22:46:17 -- common/autotest_common.sh@926 -- # '[' -z 1124144 ']' 00:19:32.912 22:46:17 -- common/autotest_common.sh@930 -- # kill -0 1124144 00:19:32.912 22:46:17 -- common/autotest_common.sh@931 -- # uname 00:19:32.912 22:46:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:32.912 22:46:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1124144 00:19:32.912 22:46:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:32.912 22:46:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:32.912 22:46:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1124144' 00:19:32.912 killing process with pid 1124144 00:19:32.912 22:46:17 -- common/autotest_common.sh@945 -- # kill 1124144 00:19:32.912 22:46:17 -- common/autotest_common.sh@950 -- # wait 1124144 00:19:33.172 22:46:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:33.172 22:46:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:33.172 22:46:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:33.172 22:46:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.172 22:46:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:33.172 22:46:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.172 22:46:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.172 22:46:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.090 22:46:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:35.090 00:19:35.090 real 0m29.292s 00:19:35.090 user 2m38.343s 00:19:35.090 sys 0m9.730s 00:19:35.090 22:46:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.090 22:46:19 -- common/autotest_common.sh@10 -- # set +x 00:19:35.090 ************************************ 00:19:35.090 END TEST nvmf_fio_target 00:19:35.090 ************************************ 00:19:35.090 22:46:19 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:35.090 22:46:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:35.090 22:46:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:35.090 22:46:19 -- common/autotest_common.sh@10 -- # set +x 00:19:35.090 ************************************ 00:19:35.090 START TEST nvmf_bdevio 00:19:35.090 ************************************ 00:19:35.090 22:46:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:35.351 * Looking for test storage... 
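The nvmf_bdevio test starting here drives SPDK's bdevio CUnit app against an NVMe-oF/TCP-attached controller. As the log below shows, the target is configured over RPC (a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420), and bdevio is then pointed at it with a generated JSON config; the --json /dev/fd/62 seen later is consistent with feeding gen_nvmf_target_json through process substitution. A condensed sketch of that flow, with paths shortened (rpc.py stands in for the script's rpc_cmd helper, and the final line is an assumption about how bdevio.sh wires up the config):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# run the CUnit-based bdev I/O tests against the exported namespace;
# the JSON holds a single bdev_nvme_attach_controller to 10.0.0.2:4420
bdevio --json <(gen_nvmf_target_json)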
00:19:35.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.351 22:46:19 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.351 22:46:19 -- nvmf/common.sh@7 -- # uname -s 00:19:35.351 22:46:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.351 22:46:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.351 22:46:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.351 22:46:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.351 22:46:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.351 22:46:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.351 22:46:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.351 22:46:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.351 22:46:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.351 22:46:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.351 22:46:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.351 22:46:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.351 22:46:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.351 22:46:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.351 22:46:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.351 22:46:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.351 22:46:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.351 22:46:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.351 22:46:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.351 22:46:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.351 22:46:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.351 22:46:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.351 22:46:20 -- paths/export.sh@5 -- # export PATH 00:19:35.351 22:46:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.351 22:46:20 -- nvmf/common.sh@46 -- # : 0 00:19:35.351 22:46:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:35.351 22:46:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:35.351 22:46:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:35.351 22:46:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.351 22:46:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.351 22:46:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:35.351 22:46:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:35.351 22:46:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:35.351 22:46:20 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.351 22:46:20 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.351 22:46:20 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:35.351 22:46:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:35.351 22:46:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.351 22:46:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:35.351 22:46:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:35.351 22:46:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:35.351 22:46:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.351 22:46:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.351 22:46:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.351 22:46:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:35.351 22:46:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:35.351 22:46:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:35.351 22:46:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.528 22:46:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:43.528 22:46:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:43.528 22:46:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:43.528 22:46:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:43.528 22:46:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:43.528 22:46:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:43.528 22:46:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:43.528 22:46:27 -- nvmf/common.sh@294 -- # net_devs=() 00:19:43.528 22:46:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:43.528 22:46:27 -- nvmf/common.sh@295 
-- # e810=() 00:19:43.528 22:46:27 -- nvmf/common.sh@295 -- # local -ga e810 00:19:43.528 22:46:27 -- nvmf/common.sh@296 -- # x722=() 00:19:43.528 22:46:27 -- nvmf/common.sh@296 -- # local -ga x722 00:19:43.528 22:46:27 -- nvmf/common.sh@297 -- # mlx=() 00:19:43.528 22:46:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:43.528 22:46:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.528 22:46:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:43.528 22:46:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:43.528 22:46:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:43.528 22:46:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.528 22:46:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:43.528 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:43.528 22:46:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.528 22:46:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:43.528 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:43.528 22:46:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:43.528 22:46:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:43.528 22:46:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.528 22:46:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.528 22:46:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.528 22:46:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.528 22:46:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:43.528 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:43.528 22:46:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.529 22:46:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.529 22:46:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.529 22:46:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.529 22:46:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.529 22:46:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:43.529 Found net devices under 0000:31:00.1: cvl_0_1 00:19:43.529 22:46:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.529 22:46:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:43.529 22:46:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:43.529 22:46:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:43.529 22:46:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:43.529 22:46:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:43.529 22:46:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.529 22:46:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.529 22:46:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.529 22:46:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:43.529 22:46:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.529 22:46:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.529 22:46:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:43.529 22:46:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.529 22:46:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.529 22:46:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:43.529 22:46:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:43.529 22:46:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.529 22:46:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.529 22:46:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.529 22:46:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.529 22:46:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:43.529 22:46:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.529 22:46:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.529 22:46:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.529 22:46:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:43.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:19:43.529 00:19:43.529 --- 10.0.0.2 ping statistics --- 00:19:43.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.529 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:19:43.529 22:46:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:19:43.529 00:19:43.529 --- 10.0.0.1 ping statistics --- 00:19:43.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.529 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:19:43.529 22:46:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.529 22:46:28 -- nvmf/common.sh@410 -- # return 0 00:19:43.529 22:46:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:43.529 22:46:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.529 22:46:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:43.529 22:46:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:43.529 22:46:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.529 22:46:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:43.529 22:46:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:43.529 22:46:28 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:43.529 22:46:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:43.529 22:46:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:43.529 22:46:28 -- common/autotest_common.sh@10 -- # set +x 00:19:43.529 22:46:28 -- nvmf/common.sh@469 -- # nvmfpid=1133542 00:19:43.529 22:46:28 -- nvmf/common.sh@470 -- # waitforlisten 1133542 00:19:43.529 22:46:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:43.529 22:46:28 -- common/autotest_common.sh@819 -- # '[' -z 1133542 ']' 00:19:43.529 22:46:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.529 22:46:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.529 22:46:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.529 22:46:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.529 22:46:28 -- common/autotest_common.sh@10 -- # set +x 00:19:43.529 [2024-04-15 22:46:28.202618] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:43.529 [2024-04-15 22:46:28.202681] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.529 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.529 [2024-04-15 22:46:28.298568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.790 [2024-04-15 22:46:28.389654] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:43.790 [2024-04-15 22:46:28.389810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.790 [2024-04-15 22:46:28.389819] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.790 [2024-04-15 22:46:28.389827] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
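To recap the networking just set up above (condensed from the nvmf/common.sh output; the cvl_0_0/cvl_0_1 names come from the E810 ports found earlier): one port is moved into a private network namespace to act as the target, the other stays in the host namespace as the initiator, a single ping in each direction verifies the 10.0.0.0/24 link, and nvmf_tgt is then launched inside the namespace (its startup notices continue below). Roughly, with paths shortened:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # link sanity check
modprobe nvme-tcp                                                   # kernel initiator on the host side
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x78       # target app inside the namespace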
00:19:43.790 [2024-04-15 22:46:28.389991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:43.790 [2024-04-15 22:46:28.390154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:43.790 [2024-04-15 22:46:28.390315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.790 [2024-04-15 22:46:28.390315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:44.362 22:46:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:44.362 22:46:28 -- common/autotest_common.sh@852 -- # return 0 00:19:44.362 22:46:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:44.362 22:46:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:44.362 22:46:28 -- common/autotest_common.sh@10 -- # set +x 00:19:44.362 22:46:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.362 22:46:29 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.362 22:46:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.362 22:46:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.362 [2024-04-15 22:46:29.043967] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.362 22:46:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.362 22:46:29 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:44.362 22:46:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.362 22:46:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.362 Malloc0 00:19:44.362 22:46:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.362 22:46:29 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:44.362 22:46:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.362 22:46:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.362 22:46:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.362 22:46:29 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:44.362 22:46:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.362 22:46:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.362 22:46:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.362 22:46:29 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.362 22:46:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.362 22:46:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.362 [2024-04-15 22:46:29.109130] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.362 22:46:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.362 22:46:29 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:44.362 22:46:29 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:44.362 22:46:29 -- nvmf/common.sh@520 -- # config=() 00:19:44.362 22:46:29 -- nvmf/common.sh@520 -- # local subsystem config 00:19:44.362 22:46:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:44.362 22:46:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:44.362 { 00:19:44.362 "params": { 00:19:44.362 "name": "Nvme$subsystem", 00:19:44.362 "trtype": "$TEST_TRANSPORT", 00:19:44.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.362 "adrfam": "ipv4", 00:19:44.362 "trsvcid": 
"$NVMF_PORT", 00:19:44.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.362 "hdgst": ${hdgst:-false}, 00:19:44.362 "ddgst": ${ddgst:-false} 00:19:44.362 }, 00:19:44.362 "method": "bdev_nvme_attach_controller" 00:19:44.362 } 00:19:44.362 EOF 00:19:44.362 )") 00:19:44.362 22:46:29 -- nvmf/common.sh@542 -- # cat 00:19:44.362 22:46:29 -- nvmf/common.sh@544 -- # jq . 00:19:44.362 22:46:29 -- nvmf/common.sh@545 -- # IFS=, 00:19:44.362 22:46:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:44.362 "params": { 00:19:44.362 "name": "Nvme1", 00:19:44.362 "trtype": "tcp", 00:19:44.362 "traddr": "10.0.0.2", 00:19:44.362 "adrfam": "ipv4", 00:19:44.362 "trsvcid": "4420", 00:19:44.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.362 "hdgst": false, 00:19:44.362 "ddgst": false 00:19:44.362 }, 00:19:44.362 "method": "bdev_nvme_attach_controller" 00:19:44.362 }' 00:19:44.362 [2024-04-15 22:46:29.168863] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:44.362 [2024-04-15 22:46:29.168961] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133641 ] 00:19:44.622 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.622 [2024-04-15 22:46:29.244572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:44.622 [2024-04-15 22:46:29.317624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.622 [2024-04-15 22:46:29.317765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.622 [2024-04-15 22:46:29.317769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.883 [2024-04-15 22:46:29.461620] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:44.883 [2024-04-15 22:46:29.461653] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:44.883 I/O targets: 00:19:44.883 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:44.883 00:19:44.883 00:19:44.883 CUnit - A unit testing framework for C - Version 2.1-3 00:19:44.883 http://cunit.sourceforge.net/ 00:19:44.883 00:19:44.883 00:19:44.883 Suite: bdevio tests on: Nvme1n1 00:19:44.883 Test: blockdev write read block ...passed 00:19:44.883 Test: blockdev write zeroes read block ...passed 00:19:44.883 Test: blockdev write zeroes read no split ...passed 00:19:44.883 Test: blockdev write zeroes read split ...passed 00:19:44.883 Test: blockdev write zeroes read split partial ...passed 00:19:44.883 Test: blockdev reset ...[2024-04-15 22:46:29.669453] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:44.883 [2024-04-15 22:46:29.669500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b2080 (9): Bad file descriptor 00:19:44.883 [2024-04-15 22:46:29.688967] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:44.883 passed 00:19:44.883 Test: blockdev write read 8 blocks ...passed 00:19:45.144 Test: blockdev write read size > 128k ...passed 00:19:45.144 Test: blockdev write read invalid size ...passed 00:19:45.144 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:45.144 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:45.144 Test: blockdev write read max offset ...passed 00:19:45.144 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:45.144 Test: blockdev writev readv 8 blocks ...passed 00:19:45.144 Test: blockdev writev readv 30 x 1block ...passed 00:19:45.144 Test: blockdev writev readv block ...passed 00:19:45.144 Test: blockdev writev readv size > 128k ...passed 00:19:45.144 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:45.144 Test: blockdev comparev and writev ...[2024-04-15 22:46:29.908439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.144 [2024-04-15 22:46:29.908465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.144 [2024-04-15 22:46:29.908475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.144 [2024-04-15 22:46:29.908481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.144 [2024-04-15 22:46:29.908780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.144 [2024-04-15 22:46:29.908788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.144 [2024-04-15 22:46:29.908797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.144 [2024-04-15 22:46:29.908802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.144 [2024-04-15 22:46:29.909051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.144 [2024-04-15 22:46:29.909059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:45.144 [2024-04-15 22:46:29.909068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.144 [2024-04-15 22:46:29.909074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.144 [2024-04-15 22:46:29.909323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.144 [2024-04-15 22:46:29.909331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.144 [2024-04-15 22:46:29.909340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.144 [2024-04-15 22:46:29.909345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.144 passed 00:19:45.405 Test: blockdev nvme passthru rw ...passed 00:19:45.405 Test: blockdev nvme passthru vendor specific ...[2024-04-15 22:46:29.994017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:45.405 [2024-04-15 22:46:29.994028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.405 [2024-04-15 22:46:29.994238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:45.405 [2024-04-15 22:46:29.994245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.405 [2024-04-15 22:46:29.994353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:45.405 [2024-04-15 22:46:29.994363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.405 [2024-04-15 22:46:29.994474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:45.405 [2024-04-15 22:46:29.994481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.405 passed 00:19:45.405 Test: blockdev nvme admin passthru ...passed 00:19:45.405 Test: blockdev copy ...passed 00:19:45.405 00:19:45.405 Run Summary: Type Total Ran Passed Failed Inactive 00:19:45.405 suites 1 1 n/a 0 0 00:19:45.405 tests 23 23 23 0 0 00:19:45.405 asserts 152 152 152 0 n/a 00:19:45.405 00:19:45.405 Elapsed time = 1.178 seconds 00:19:45.405 22:46:30 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.405 22:46:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.405 22:46:30 -- common/autotest_common.sh@10 -- # set +x 00:19:45.405 22:46:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.405 22:46:30 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:45.405 22:46:30 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:45.405 22:46:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:45.405 22:46:30 -- nvmf/common.sh@116 -- # sync 00:19:45.405 22:46:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:45.405 22:46:30 -- nvmf/common.sh@119 -- # set +e 00:19:45.405 22:46:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:45.405 22:46:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:45.405 rmmod nvme_tcp 00:19:45.666 rmmod nvme_fabrics 00:19:45.666 rmmod nvme_keyring 00:19:45.666 22:46:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:45.666 22:46:30 -- nvmf/common.sh@123 -- # set -e 00:19:45.666 22:46:30 -- nvmf/common.sh@124 -- # return 0 00:19:45.666 22:46:30 -- nvmf/common.sh@477 -- # '[' -n 1133542 ']' 00:19:45.666 22:46:30 -- nvmf/common.sh@478 -- # killprocess 1133542 00:19:45.666 22:46:30 -- common/autotest_common.sh@926 -- # '[' -z 1133542 ']' 00:19:45.666 22:46:30 -- common/autotest_common.sh@930 -- # kill -0 1133542 00:19:45.666 22:46:30 -- common/autotest_common.sh@931 -- # uname 00:19:45.666 22:46:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:45.666 22:46:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1133542 00:19:45.666 22:46:30 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:45.666 22:46:30 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:45.666 22:46:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1133542' 00:19:45.666 killing process with pid 1133542 00:19:45.666 22:46:30 -- common/autotest_common.sh@945 -- # kill 1133542 00:19:45.666 22:46:30 -- common/autotest_common.sh@950 -- # wait 1133542 00:19:45.666 22:46:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:45.666 22:46:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:45.666 22:46:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:45.666 22:46:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.666 22:46:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:45.666 22:46:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.666 22:46:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.666 22:46:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.212 22:46:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:48.212 00:19:48.212 real 0m12.631s 00:19:48.212 user 0m12.537s 00:19:48.212 sys 0m6.548s 00:19:48.212 22:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.212 22:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.212 ************************************ 00:19:48.212 END TEST nvmf_bdevio 00:19:48.212 ************************************ 00:19:48.212 22:46:32 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:48.213 22:46:32 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.213 22:46:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:48.213 22:46:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:48.213 22:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.213 ************************************ 00:19:48.213 START TEST nvmf_bdevio_no_huge 00:19:48.213 ************************************ 00:19:48.213 22:46:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.213 * Looking for test storage... 
00:19:48.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.213 22:46:32 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.213 22:46:32 -- nvmf/common.sh@7 -- # uname -s 00:19:48.213 22:46:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.213 22:46:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.213 22:46:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.213 22:46:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.213 22:46:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.213 22:46:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.213 22:46:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.213 22:46:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.213 22:46:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.213 22:46:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.213 22:46:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.213 22:46:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.213 22:46:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.213 22:46:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.213 22:46:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.213 22:46:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.213 22:46:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.213 22:46:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.213 22:46:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.213 22:46:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.213 22:46:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.213 22:46:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.213 22:46:32 -- paths/export.sh@5 -- # export PATH 00:19:48.213 22:46:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.213 22:46:32 -- nvmf/common.sh@46 -- # : 0 00:19:48.213 22:46:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:48.213 22:46:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:48.213 22:46:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:48.213 22:46:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.213 22:46:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.213 22:46:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:48.213 22:46:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:48.213 22:46:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:48.213 22:46:32 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.213 22:46:32 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.213 22:46:32 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:48.213 22:46:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:48.213 22:46:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.213 22:46:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:48.213 22:46:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:48.213 22:46:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:48.213 22:46:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.213 22:46:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.213 22:46:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.213 22:46:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:48.213 22:46:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:48.213 22:46:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:48.213 22:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:56.356 22:46:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:56.357 22:46:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:56.357 22:46:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:56.357 22:46:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:56.357 22:46:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:56.357 22:46:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:56.357 22:46:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:56.357 22:46:40 -- nvmf/common.sh@294 -- # net_devs=() 00:19:56.357 22:46:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:56.357 22:46:40 -- nvmf/common.sh@295 
-- # e810=() 00:19:56.357 22:46:40 -- nvmf/common.sh@295 -- # local -ga e810 00:19:56.357 22:46:40 -- nvmf/common.sh@296 -- # x722=() 00:19:56.357 22:46:40 -- nvmf/common.sh@296 -- # local -ga x722 00:19:56.357 22:46:40 -- nvmf/common.sh@297 -- # mlx=() 00:19:56.357 22:46:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:56.357 22:46:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.357 22:46:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:56.357 22:46:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:56.357 22:46:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:56.357 22:46:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:56.357 22:46:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:56.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:56.357 22:46:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:56.357 22:46:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:56.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:56.357 22:46:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:56.357 22:46:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:56.357 22:46:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.357 22:46:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:56.357 22:46:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.357 22:46:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:56.357 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:56.357 22:46:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.357 22:46:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:56.357 22:46:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.357 22:46:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:56.357 22:46:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.357 22:46:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:56.357 Found net devices under 0000:31:00.1: cvl_0_1 00:19:56.357 22:46:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.357 22:46:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:56.357 22:46:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:56.357 22:46:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:56.357 22:46:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.357 22:46:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.357 22:46:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.357 22:46:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:56.357 22:46:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:56.357 22:46:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:56.357 22:46:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:56.357 22:46:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:56.357 22:46:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.357 22:46:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:56.357 22:46:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:56.357 22:46:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:56.357 22:46:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:56.357 22:46:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:56.357 22:46:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:56.357 22:46:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:56.357 22:46:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:56.357 22:46:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:56.357 22:46:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:56.357 22:46:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:56.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.754 ms 00:19:56.357 00:19:56.357 --- 10.0.0.2 ping statistics --- 00:19:56.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.357 rtt min/avg/max/mdev = 0.754/0.754/0.754/0.000 ms 00:19:56.357 22:46:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:56.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:19:56.357 00:19:56.357 --- 10.0.0.1 ping statistics --- 00:19:56.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.357 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:19:56.357 22:46:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.357 22:46:40 -- nvmf/common.sh@410 -- # return 0 00:19:56.357 22:46:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.357 22:46:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.357 22:46:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:56.357 22:46:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.357 22:46:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:56.357 22:46:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:56.357 22:46:40 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:56.357 22:46:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:56.357 22:46:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:56.357 22:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:56.357 22:46:40 -- nvmf/common.sh@469 -- # nvmfpid=1138641 00:19:56.357 22:46:40 -- nvmf/common.sh@470 -- # waitforlisten 1138641 00:19:56.357 22:46:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:56.357 22:46:40 -- common/autotest_common.sh@819 -- # '[' -z 1138641 ']' 00:19:56.357 22:46:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.357 22:46:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:56.357 22:46:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.357 22:46:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:56.357 22:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:56.357 [2024-04-15 22:46:40.584818] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:56.357 [2024-04-15 22:46:40.584886] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:56.357 [2024-04-15 22:46:40.686604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.357 [2024-04-15 22:46:40.791901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:56.357 [2024-04-15 22:46:40.792060] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.357 [2024-04-15 22:46:40.792069] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.357 [2024-04-15 22:46:40.792078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
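This second pass (nvmf_bdevio_no_huge) repeats the same bdevio flow, but as the EAL parameters above show, both the target and the test app run without hugepages: --no-huge switches DPDK to plain anonymous memory with --iova-mode=va, and -s 1024 caps the memory pool at 1024 MB. Only the launch flags differ from the earlier run; sketched below with paths shortened and the same process-substitution assumption for the generated JSON as noted above:

ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78   # target without hugepages
bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024                           # test app, same memory mode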
00:19:56.357 [2024-04-15 22:46:40.792246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:56.357 [2024-04-15 22:46:40.792408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:56.357 [2024-04-15 22:46:40.792588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:56.357 [2024-04-15 22:46:40.792640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.618 22:46:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:56.618 22:46:41 -- common/autotest_common.sh@852 -- # return 0 00:19:56.618 22:46:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:56.618 22:46:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:56.618 22:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:56.618 22:46:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.618 22:46:41 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.618 22:46:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.618 22:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 [2024-04-15 22:46:41.428604] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.879 22:46:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.879 22:46:41 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:56.879 22:46:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.879 22:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 Malloc0 00:19:56.879 22:46:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.879 22:46:41 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:56.879 22:46:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.879 22:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 22:46:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.879 22:46:41 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:56.879 22:46:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.879 22:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 22:46:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.879 22:46:41 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:56.879 22:46:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.879 22:46:41 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 [2024-04-15 22:46:41.482237] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.879 22:46:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.879 22:46:41 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:56.879 22:46:41 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:56.879 22:46:41 -- nvmf/common.sh@520 -- # config=() 00:19:56.879 22:46:41 -- nvmf/common.sh@520 -- # local subsystem config 00:19:56.879 22:46:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:56.879 22:46:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:56.879 { 00:19:56.879 "params": { 00:19:56.879 "name": "Nvme$subsystem", 00:19:56.879 "trtype": "$TEST_TRANSPORT", 00:19:56.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.879 "adrfam": "ipv4", 00:19:56.879 
"trsvcid": "$NVMF_PORT", 00:19:56.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.879 "hdgst": ${hdgst:-false}, 00:19:56.879 "ddgst": ${ddgst:-false} 00:19:56.879 }, 00:19:56.879 "method": "bdev_nvme_attach_controller" 00:19:56.879 } 00:19:56.879 EOF 00:19:56.879 )") 00:19:56.879 22:46:41 -- nvmf/common.sh@542 -- # cat 00:19:56.879 22:46:41 -- nvmf/common.sh@544 -- # jq . 00:19:56.879 22:46:41 -- nvmf/common.sh@545 -- # IFS=, 00:19:56.879 22:46:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:56.879 "params": { 00:19:56.879 "name": "Nvme1", 00:19:56.879 "trtype": "tcp", 00:19:56.879 "traddr": "10.0.0.2", 00:19:56.879 "adrfam": "ipv4", 00:19:56.879 "trsvcid": "4420", 00:19:56.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.879 "hdgst": false, 00:19:56.879 "ddgst": false 00:19:56.879 }, 00:19:56.879 "method": "bdev_nvme_attach_controller" 00:19:56.879 }' 00:19:56.879 [2024-04-15 22:46:41.535847] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:56.879 [2024-04-15 22:46:41.535918] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1138698 ] 00:19:56.879 [2024-04-15 22:46:41.612411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:57.140 [2024-04-15 22:46:41.708536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.140 [2024-04-15 22:46:41.708677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.140 [2024-04-15 22:46:41.708680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.140 [2024-04-15 22:46:41.891953] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:57.140 [2024-04-15 22:46:41.891979] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:57.140 I/O targets: 00:19:57.140 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:57.140 00:19:57.140 00:19:57.140 CUnit - A unit testing framework for C - Version 2.1-3 00:19:57.140 http://cunit.sourceforge.net/ 00:19:57.140 00:19:57.140 00:19:57.140 Suite: bdevio tests on: Nvme1n1 00:19:57.140 Test: blockdev write read block ...passed 00:19:57.400 Test: blockdev write zeroes read block ...passed 00:19:57.400 Test: blockdev write zeroes read no split ...passed 00:19:57.400 Test: blockdev write zeroes read split ...passed 00:19:57.400 Test: blockdev write zeroes read split partial ...passed 00:19:57.400 Test: blockdev reset ...[2024-04-15 22:46:42.106951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.400 [2024-04-15 22:46:42.107010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x774480 (9): Bad file descriptor 00:19:57.660 [2024-04-15 22:46:42.251878] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:57.660 passed 00:19:57.660 Test: blockdev write read 8 blocks ...passed 00:19:57.660 Test: blockdev write read size > 128k ...passed 00:19:57.660 Test: blockdev write read invalid size ...passed 00:19:57.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:57.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:57.660 Test: blockdev write read max offset ...passed 00:19:57.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:57.660 Test: blockdev writev readv 8 blocks ...passed 00:19:57.660 Test: blockdev writev readv 30 x 1block ...passed 00:19:57.920 Test: blockdev writev readv block ...passed 00:19:57.920 Test: blockdev writev readv size > 128k ...passed 00:19:57.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:57.920 Test: blockdev comparev and writev ...[2024-04-15 22:46:42.478837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.920 [2024-04-15 22:46:42.478861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:57.920 [2024-04-15 22:46:42.478871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.920 [2024-04-15 22:46:42.478877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:57.920 [2024-04-15 22:46:42.479418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.920 [2024-04-15 22:46:42.479426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:57.920 [2024-04-15 22:46:42.479435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.920 [2024-04-15 22:46:42.479440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:57.920 [2024-04-15 22:46:42.480004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.920 [2024-04-15 22:46:42.480011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:57.920 [2024-04-15 22:46:42.480021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.920 [2024-04-15 22:46:42.480026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:57.920 [2024-04-15 22:46:42.480510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.920 [2024-04-15 22:46:42.480516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:57.921 [2024-04-15 22:46:42.480525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.921 [2024-04-15 22:46:42.480531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:57.921 passed 00:19:57.921 Test: blockdev nvme passthru rw ...passed 00:19:57.921 Test: blockdev nvme passthru vendor specific ...[2024-04-15 22:46:42.565282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.921 [2024-04-15 22:46:42.565291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:57.921 [2024-04-15 22:46:42.565565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.921 [2024-04-15 22:46:42.565573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:57.921 [2024-04-15 22:46:42.565992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.921 [2024-04-15 22:46:42.565999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:57.921 [2024-04-15 22:46:42.566381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.921 [2024-04-15 22:46:42.566388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:57.921 passed 00:19:57.921 Test: blockdev nvme admin passthru ...passed 00:19:57.921 Test: blockdev copy ...passed 00:19:57.921 00:19:57.921 Run Summary: Type Total Ran Passed Failed Inactive 00:19:57.921 suites 1 1 n/a 0 0 00:19:57.921 tests 23 23 23 0 0 00:19:57.921 asserts 152 152 152 0 n/a 00:19:57.921 00:19:57.921 Elapsed time = 1.455 seconds 00:19:58.181 22:46:42 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.182 22:46:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.182 22:46:42 -- common/autotest_common.sh@10 -- # set +x 00:19:58.182 22:46:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.182 22:46:42 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:58.182 22:46:42 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:58.182 22:46:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.182 22:46:42 -- nvmf/common.sh@116 -- # sync 00:19:58.182 22:46:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.182 22:46:42 -- nvmf/common.sh@119 -- # set +e 00:19:58.182 22:46:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.182 22:46:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.182 rmmod nvme_tcp 00:19:58.182 rmmod nvme_fabrics 00:19:58.182 rmmod nvme_keyring 00:19:58.182 22:46:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:58.182 22:46:42 -- nvmf/common.sh@123 -- # set -e 00:19:58.182 22:46:42 -- nvmf/common.sh@124 -- # return 0 00:19:58.182 22:46:42 -- nvmf/common.sh@477 -- # '[' -n 1138641 ']' 00:19:58.182 22:46:42 -- nvmf/common.sh@478 -- # killprocess 1138641 00:19:58.182 22:46:42 -- common/autotest_common.sh@926 -- # '[' -z 1138641 ']' 00:19:58.182 22:46:42 -- common/autotest_common.sh@930 -- # kill -0 1138641 00:19:58.182 22:46:42 -- common/autotest_common.sh@931 -- # uname 00:19:58.182 22:46:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:58.182 22:46:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1138641 00:19:58.442 22:46:43 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:58.442 22:46:43 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:58.442 22:46:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1138641' 00:19:58.442 killing process with pid 1138641 00:19:58.442 22:46:43 -- common/autotest_common.sh@945 -- # kill 1138641 00:19:58.442 22:46:43 -- common/autotest_common.sh@950 -- # wait 1138641 00:19:58.703 22:46:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:58.703 22:46:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:58.703 22:46:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:58.703 22:46:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.703 22:46:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:58.703 22:46:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.703 22:46:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.703 22:46:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.617 22:46:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:00.617 00:20:00.617 real 0m12.827s 00:20:00.617 user 0m14.365s 00:20:00.617 sys 0m6.821s 00:20:00.617 22:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.617 22:46:45 -- common/autotest_common.sh@10 -- # set +x 00:20:00.617 ************************************ 00:20:00.617 END TEST nvmf_bdevio_no_huge 00:20:00.617 ************************************ 00:20:00.877 22:46:45 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:00.877 22:46:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:00.877 22:46:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:00.877 22:46:45 -- common/autotest_common.sh@10 -- # set +x 00:20:00.877 ************************************ 00:20:00.877 START TEST nvmf_tls 00:20:00.877 ************************************ 00:20:00.878 22:46:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:00.878 * Looking for test storage... 
00:20:00.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.878 22:46:45 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.878 22:46:45 -- nvmf/common.sh@7 -- # uname -s 00:20:00.878 22:46:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.878 22:46:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.878 22:46:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.878 22:46:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.878 22:46:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.878 22:46:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.878 22:46:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.878 22:46:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.878 22:46:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.878 22:46:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.878 22:46:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.878 22:46:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.878 22:46:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.878 22:46:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.878 22:46:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.878 22:46:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.878 22:46:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.878 22:46:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.878 22:46:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.878 22:46:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.878 22:46:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.878 22:46:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.878 22:46:45 -- paths/export.sh@5 -- # export PATH 00:20:00.878 22:46:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.878 22:46:45 -- nvmf/common.sh@46 -- # : 0 00:20:00.878 22:46:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:00.878 22:46:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:00.878 22:46:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:00.878 22:46:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.878 22:46:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.878 22:46:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:00.878 22:46:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:00.878 22:46:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:00.878 22:46:45 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:00.878 22:46:45 -- target/tls.sh@71 -- # nvmftestinit 00:20:00.878 22:46:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:00.878 22:46:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.878 22:46:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:00.878 22:46:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:00.878 22:46:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:00.878 22:46:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.878 22:46:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.878 22:46:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.878 22:46:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:00.878 22:46:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:00.878 22:46:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:00.878 22:46:45 -- common/autotest_common.sh@10 -- # set +x 00:20:09.024 22:46:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:09.024 22:46:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:09.024 22:46:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:09.024 22:46:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:09.024 22:46:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:09.024 22:46:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:09.024 22:46:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:09.024 22:46:53 -- nvmf/common.sh@294 -- # net_devs=() 00:20:09.024 22:46:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:09.024 22:46:53 -- nvmf/common.sh@295 -- # e810=() 00:20:09.024 
22:46:53 -- nvmf/common.sh@295 -- # local -ga e810 00:20:09.024 22:46:53 -- nvmf/common.sh@296 -- # x722=() 00:20:09.024 22:46:53 -- nvmf/common.sh@296 -- # local -ga x722 00:20:09.024 22:46:53 -- nvmf/common.sh@297 -- # mlx=() 00:20:09.024 22:46:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:09.024 22:46:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.024 22:46:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:09.024 22:46:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:09.024 22:46:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:09.024 22:46:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:09.024 22:46:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:09.024 22:46:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:09.024 22:46:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:09.024 22:46:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:09.025 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:09.025 22:46:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:09.025 22:46:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:09.025 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:09.025 22:46:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:09.025 22:46:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:09.025 22:46:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.025 22:46:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:09.025 22:46:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.025 22:46:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:09.025 Found net devices under 
0000:31:00.0: cvl_0_0 00:20:09.025 22:46:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.025 22:46:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:09.025 22:46:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.025 22:46:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:09.025 22:46:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.025 22:46:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:09.025 Found net devices under 0000:31:00.1: cvl_0_1 00:20:09.025 22:46:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.025 22:46:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:09.025 22:46:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:09.025 22:46:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:09.025 22:46:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.025 22:46:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.025 22:46:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.025 22:46:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:09.025 22:46:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.025 22:46:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.025 22:46:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:09.025 22:46:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.025 22:46:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.025 22:46:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:09.025 22:46:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:09.025 22:46:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.025 22:46:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.025 22:46:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.025 22:46:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.025 22:46:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:09.025 22:46:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.025 22:46:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.025 22:46:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.025 22:46:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:09.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:20:09.025 00:20:09.025 --- 10.0.0.2 ping statistics --- 00:20:09.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.025 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:20:09.025 22:46:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:20:09.025 00:20:09.025 --- 10.0.0.1 ping statistics --- 00:20:09.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.025 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:20:09.025 22:46:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.025 22:46:53 -- nvmf/common.sh@410 -- # return 0 00:20:09.025 22:46:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:09.025 22:46:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.025 22:46:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:09.025 22:46:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.025 22:46:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:09.025 22:46:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:09.025 22:46:53 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:09.025 22:46:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:09.025 22:46:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:09.025 22:46:53 -- common/autotest_common.sh@10 -- # set +x 00:20:09.025 22:46:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:09.025 22:46:53 -- nvmf/common.sh@469 -- # nvmfpid=1143731 00:20:09.025 22:46:53 -- nvmf/common.sh@470 -- # waitforlisten 1143731 00:20:09.025 22:46:53 -- common/autotest_common.sh@819 -- # '[' -z 1143731 ']' 00:20:09.025 22:46:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.025 22:46:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:09.025 22:46:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.025 22:46:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:09.025 22:46:53 -- common/autotest_common.sh@10 -- # set +x 00:20:09.025 [2024-04-15 22:46:53.701199] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:09.025 [2024-04-15 22:46:53.701256] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.025 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.025 [2024-04-15 22:46:53.776188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.287 [2024-04-15 22:46:53.842078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.287 [2024-04-15 22:46:53.842194] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.287 [2024-04-15 22:46:53.842201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.287 [2024-04-15 22:46:53.842208] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
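Buried in the trace above is the whole nvmf_tcp_init network plumbing for the tls test: the target-side port is moved into a namespace, the 10.0.0.x test addresses are assigned, TCP/4420 is opened, and reachability is verified in both directions. The same steps as a stand-alone sketch (interface names cvl_0_0/cvl_0_1 and the addresses are simply the ones this log uses; run as root):

# Sketch of the namespace/addressing setup traced above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator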
00:20:09.287 [2024-04-15 22:46:53.842225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.859 22:46:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:09.859 22:46:54 -- common/autotest_common.sh@852 -- # return 0 00:20:09.859 22:46:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:09.859 22:46:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:09.859 22:46:54 -- common/autotest_common.sh@10 -- # set +x 00:20:09.859 22:46:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.859 22:46:54 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:20:09.859 22:46:54 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:09.859 true 00:20:09.859 22:46:54 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.859 22:46:54 -- target/tls.sh@82 -- # jq -r .tls_version 00:20:10.153 22:46:54 -- target/tls.sh@82 -- # version=0 00:20:10.153 22:46:54 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:20:10.153 22:46:54 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:10.153 22:46:54 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.153 22:46:54 -- target/tls.sh@90 -- # jq -r .tls_version 00:20:10.444 22:46:55 -- target/tls.sh@90 -- # version=13 00:20:10.444 22:46:55 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:20:10.444 22:46:55 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:10.444 22:46:55 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.444 22:46:55 -- target/tls.sh@98 -- # jq -r .tls_version 00:20:10.705 22:46:55 -- target/tls.sh@98 -- # version=7 00:20:10.705 22:46:55 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:20:10.705 22:46:55 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.705 22:46:55 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:10.705 22:46:55 -- target/tls.sh@105 -- # ktls=false 00:20:10.705 22:46:55 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:20:10.706 22:46:55 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:10.967 22:46:55 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.967 22:46:55 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:11.228 22:46:55 -- target/tls.sh@113 -- # ktls=true 00:20:11.228 22:46:55 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:20:11.228 22:46:55 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:11.228 22:46:55 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.228 22:46:55 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:20:11.490 22:46:56 -- target/tls.sh@121 -- # ktls=false 00:20:11.490 22:46:56 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:20:11.490 22:46:56 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:20:11.490 22:46:56 -- target/tls.sh@49 -- # local key hash crc 00:20:11.490 22:46:56 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:20:11.490 22:46:56 -- target/tls.sh@51 -- # hash=01 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # gzip -1 -c 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # tail -c8 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # head -c 4 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # crc='p$H�' 00:20:11.490 22:46:56 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:11.490 22:46:56 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:20:11.490 22:46:56 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:11.490 22:46:56 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:11.490 22:46:56 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:20:11.490 22:46:56 -- target/tls.sh@49 -- # local key hash crc 00:20:11.490 22:46:56 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:20:11.490 22:46:56 -- target/tls.sh@51 -- # hash=01 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # gzip -1 -c 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # tail -c8 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # head -c 4 00:20:11.490 22:46:56 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:20:11.490 22:46:56 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:11.490 22:46:56 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:20:11.490 22:46:56 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:11.490 22:46:56 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:11.490 22:46:56 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.490 22:46:56 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:11.490 22:46:56 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:11.490 22:46:56 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:11.490 22:46:56 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.490 22:46:56 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:11.490 22:46:56 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:11.752 22:46:56 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:11.752 22:46:56 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.752 22:46:56 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.752 22:46:56 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.013 [2024-04-15 22:46:56.657748] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
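The format_interchange_psk trace just above is easy to miss in the noise: the configured hex secret gets a CRC32 appended (pulled from the trailer that gzip -1 emits) and the result is base64-wrapped into the NVMeTLSkey-1:01:...: interchange format, then written to key1.txt/key2.txt with mode 0600. A sketch of that logic as one function, with the same caveat as the traced script (the CRC bytes are held in a shell variable, so embedded NUL bytes would be lost):

# Sketch of the PSK interchange-key derivation traced above.
format_interchange_psk() {
    local key=$1 hash=01
    # gzip -1 ends with an 8-byte trailer: CRC32 (little-endian) + input size;
    # keep only the 4 CRC bytes and append them to the hex string.
    local crc
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
}
format_interchange_psk 00112233445566778899aabbccddeeff > key1.txt   # ends ...ZmZwJEiQ: as in the log
format_interchange_psk ffeeddccbbaa99887766554433221100 > key2.txt   # ends ...MDBfBm/Y: as in the log
chmod 0600 key1.txt key2.txt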
00:20:12.013 22:46:56 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.013 22:46:56 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.275 [2024-04-15 22:46:56.930433] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.275 [2024-04-15 22:46:56.930604] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.275 22:46:56 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.275 malloc0 00:20:12.536 22:46:57 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:12.536 22:46:57 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:12.798 22:46:57 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:12.798 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.807 Initializing NVMe Controllers 00:20:22.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:22.807 Initialization complete. Launching workers. 
00:20:22.807 ======================================================== 00:20:22.807 Latency(us) 00:20:22.807 Device Information : IOPS MiB/s Average min max 00:20:22.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13937.25 54.44 4592.47 1068.69 5336.22 00:20:22.807 ======================================================== 00:20:22.807 Total : 13937.25 54.44 4592.47 1068.69 5336.22 00:20:22.807 00:20:22.807 22:47:07 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:22.807 22:47:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:22.807 22:47:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:22.807 22:47:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:22.807 22:47:07 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:22.807 22:47:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:22.807 22:47:07 -- target/tls.sh@28 -- # bdevperf_pid=1146509 00:20:22.807 22:47:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.807 22:47:07 -- target/tls.sh@31 -- # waitforlisten 1146509 /var/tmp/bdevperf.sock 00:20:22.807 22:47:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.807 22:47:07 -- common/autotest_common.sh@819 -- # '[' -z 1146509 ']' 00:20:22.807 22:47:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.807 22:47:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:22.807 22:47:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.807 22:47:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:22.807 22:47:07 -- common/autotest_common.sh@10 -- # set +x 00:20:22.807 [2024-04-15 22:47:07.506029] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:20:22.807 [2024-04-15 22:47:07.506088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146509 ] 00:20:22.807 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.807 [2024-04-15 22:47:07.562126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.807 [2024-04-15 22:47:07.613251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.748 22:47:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:23.748 22:47:08 -- common/autotest_common.sh@852 -- # return 0 00:20:23.748 22:47:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:23.748 [2024-04-15 22:47:08.419073] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.748 TLSTESTn1 00:20:23.748 22:47:08 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:24.009 Running I/O for 10 seconds... 00:20:34.008 00:20:34.008 Latency(us) 00:20:34.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.008 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.008 Verification LBA range: start 0x0 length 0x2000 00:20:34.008 TLSTESTn1 : 10.02 3216.46 12.56 0.00 0.00 39751.33 3467.95 57671.68 00:20:34.008 =================================================================================================================== 00:20:34.008 Total : 3216.46 12.56 0.00 0.00 39751.33 3467.95 57671.68 00:20:34.008 0 00:20:34.008 22:47:18 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.008 22:47:18 -- target/tls.sh@45 -- # killprocess 1146509 00:20:34.008 22:47:18 -- common/autotest_common.sh@926 -- # '[' -z 1146509 ']' 00:20:34.008 22:47:18 -- common/autotest_common.sh@930 -- # kill -0 1146509 00:20:34.008 22:47:18 -- common/autotest_common.sh@931 -- # uname 00:20:34.008 22:47:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.008 22:47:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1146509 00:20:34.008 22:47:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:34.008 22:47:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:34.008 22:47:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1146509' 00:20:34.008 killing process with pid 1146509 00:20:34.008 22:47:18 -- common/autotest_common.sh@945 -- # kill 1146509 00:20:34.008 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.008 00:20:34.008 Latency(us) 00:20:34.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.008 =================================================================================================================== 00:20:34.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.008 22:47:18 -- common/autotest_common.sh@950 -- # wait 1146509 00:20:34.270 22:47:18 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:34.270 22:47:18 -- common/autotest_common.sh@640 -- # local es=0 00:20:34.270 22:47:18 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:34.270 22:47:18 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:34.270 22:47:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:34.270 22:47:18 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:34.270 22:47:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:34.270 22:47:18 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:34.270 22:47:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.270 22:47:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.270 22:47:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.270 22:47:18 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:34.270 22:47:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.270 22:47:18 -- target/tls.sh@28 -- # bdevperf_pid=1148794 00:20:34.270 22:47:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.270 22:47:18 -- target/tls.sh@31 -- # waitforlisten 1148794 /var/tmp/bdevperf.sock 00:20:34.270 22:47:18 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.270 22:47:18 -- common/autotest_common.sh@819 -- # '[' -z 1148794 ']' 00:20:34.270 22:47:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.270 22:47:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:34.270 22:47:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.270 22:47:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:34.270 22:47:18 -- common/autotest_common.sh@10 -- # set +x 00:20:34.270 [2024-04-15 22:47:18.882597] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:20:34.270 [2024-04-15 22:47:18.882654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148794 ] 00:20:34.270 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.270 [2024-04-15 22:47:18.939015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.270 [2024-04-15 22:47:18.988989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.212 22:47:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:35.212 22:47:19 -- common/autotest_common.sh@852 -- # return 0 00:20:35.212 22:47:19 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:35.212 [2024-04-15 22:47:19.798860] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.212 [2024-04-15 22:47:19.803388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.212 [2024-04-15 22:47:19.803974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0a00 (107): Transport endpoint is not connected 00:20:35.212 [2024-04-15 22:47:19.804968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0a00 (9): Bad file descriptor 00:20:35.212 [2024-04-15 22:47:19.805969] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:35.212 [2024-04-15 22:47:19.805976] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.212 [2024-04-15 22:47:19.805983] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
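Each bdevperf run in this stretch of the log follows the same pattern: start bdevperf with -z on a private RPC socket, attach a TLS controller with an explicit --psk, then drive I/O through bdevperf.py perform_tests. With the matching key (key1.txt for host1) the TLSTEST bdev comes up and the verify workload runs; with a mismatched key the bdev_nvme_attach_controller RPC fails, which is exactly the error being reported around this point. A sketch of that driver sequence, with options copied from the log (the readiness poll is an illustrative stand-in for waitforlisten):

# Sketch of the bdevperf-based TLS attach/verify sequence used by run_bdevperf.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# Attach over TLS; pointing --psk at the wrong key file makes this call fail
# instead of creating the TLSTEST bdev.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$SPDK/test/nvmf/target/key1.txt"

# Run the actual I/O through the bdevperf RPC helper.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$SOCK" perform_tests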
00:20:35.212 request: 00:20:35.212 { 00:20:35.212 "name": "TLSTEST", 00:20:35.212 "trtype": "tcp", 00:20:35.212 "traddr": "10.0.0.2", 00:20:35.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.212 "adrfam": "ipv4", 00:20:35.212 "trsvcid": "4420", 00:20:35.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.212 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:35.212 "method": "bdev_nvme_attach_controller", 00:20:35.212 "req_id": 1 00:20:35.212 } 00:20:35.212 Got JSON-RPC error response 00:20:35.212 response: 00:20:35.212 { 00:20:35.212 "code": -32602, 00:20:35.212 "message": "Invalid parameters" 00:20:35.212 } 00:20:35.212 22:47:19 -- target/tls.sh@36 -- # killprocess 1148794 00:20:35.212 22:47:19 -- common/autotest_common.sh@926 -- # '[' -z 1148794 ']' 00:20:35.212 22:47:19 -- common/autotest_common.sh@930 -- # kill -0 1148794 00:20:35.212 22:47:19 -- common/autotest_common.sh@931 -- # uname 00:20:35.212 22:47:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:35.212 22:47:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1148794 00:20:35.212 22:47:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:35.212 22:47:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:35.212 22:47:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1148794' 00:20:35.212 killing process with pid 1148794 00:20:35.212 22:47:19 -- common/autotest_common.sh@945 -- # kill 1148794 00:20:35.212 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.212 00:20:35.213 Latency(us) 00:20:35.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.213 =================================================================================================================== 00:20:35.213 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.213 22:47:19 -- common/autotest_common.sh@950 -- # wait 1148794 00:20:35.213 22:47:19 -- target/tls.sh@37 -- # return 1 00:20:35.213 22:47:19 -- common/autotest_common.sh@643 -- # es=1 00:20:35.213 22:47:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:35.213 22:47:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:35.213 22:47:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:35.213 22:47:19 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:35.213 22:47:19 -- common/autotest_common.sh@640 -- # local es=0 00:20:35.213 22:47:19 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:35.213 22:47:19 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:35.213 22:47:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:35.213 22:47:19 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:35.213 22:47:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:35.213 22:47:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:35.213 22:47:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.213 22:47:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.213 22:47:20 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:35.213 22:47:20 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:35.213 22:47:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.213 22:47:20 -- target/tls.sh@28 -- # bdevperf_pid=1148901 00:20:35.213 22:47:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.213 22:47:20 -- target/tls.sh@31 -- # waitforlisten 1148901 /var/tmp/bdevperf.sock 00:20:35.213 22:47:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.213 22:47:20 -- common/autotest_common.sh@819 -- # '[' -z 1148901 ']' 00:20:35.213 22:47:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.213 22:47:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.213 22:47:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.213 22:47:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.213 22:47:20 -- common/autotest_common.sh@10 -- # set +x 00:20:35.474 [2024-04-15 22:47:20.053568] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:35.474 [2024-04-15 22:47:20.053672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148901 ] 00:20:35.474 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.474 [2024-04-15 22:47:20.118319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.474 [2024-04-15 22:47:20.169593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.047 22:47:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:36.047 22:47:20 -- common/autotest_common.sh@852 -- # return 0 00:20:36.047 22:47:20 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.309 [2024-04-15 22:47:20.959557] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.309 [2024-04-15 22:47:20.963847] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:36.309 [2024-04-15 22:47:20.963872] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:36.309 [2024-04-15 22:47:20.963898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.309 [2024-04-15 22:47:20.964556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bca00 (107): Transport endpoint is not connected 00:20:36.309 [2024-04-15 22:47:20.965547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x23bca00 (9): Bad file descriptor 00:20:36.309 [2024-04-15 22:47:20.966549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:36.309 [2024-04-15 22:47:20.966556] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.309 [2024-04-15 22:47:20.966563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:36.309 request: 00:20:36.309 { 00:20:36.309 "name": "TLSTEST", 00:20:36.309 "trtype": "tcp", 00:20:36.309 "traddr": "10.0.0.2", 00:20:36.309 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.309 "adrfam": "ipv4", 00:20:36.309 "trsvcid": "4420", 00:20:36.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.309 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:36.309 "method": "bdev_nvme_attach_controller", 00:20:36.309 "req_id": 1 00:20:36.309 } 00:20:36.309 Got JSON-RPC error response 00:20:36.309 response: 00:20:36.309 { 00:20:36.309 "code": -32602, 00:20:36.309 "message": "Invalid parameters" 00:20:36.309 } 00:20:36.309 22:47:20 -- target/tls.sh@36 -- # killprocess 1148901 00:20:36.309 22:47:20 -- common/autotest_common.sh@926 -- # '[' -z 1148901 ']' 00:20:36.309 22:47:20 -- common/autotest_common.sh@930 -- # kill -0 1148901 00:20:36.309 22:47:20 -- common/autotest_common.sh@931 -- # uname 00:20:36.309 22:47:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:36.309 22:47:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1148901 00:20:36.309 22:47:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:36.309 22:47:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:36.309 22:47:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1148901' 00:20:36.309 killing process with pid 1148901 00:20:36.309 22:47:21 -- common/autotest_common.sh@945 -- # kill 1148901 00:20:36.309 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.309 00:20:36.309 Latency(us) 00:20:36.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.309 =================================================================================================================== 00:20:36.309 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.309 22:47:21 -- common/autotest_common.sh@950 -- # wait 1148901 00:20:36.570 22:47:21 -- target/tls.sh@37 -- # return 1 00:20:36.570 22:47:21 -- common/autotest_common.sh@643 -- # es=1 00:20:36.570 22:47:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:36.570 22:47:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:36.570 22:47:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:36.570 22:47:21 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.570 22:47:21 -- common/autotest_common.sh@640 -- # local es=0 00:20:36.570 22:47:21 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.570 22:47:21 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:36.570 22:47:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:36.570 22:47:21 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:36.570 22:47:21 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:36.570 22:47:21 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.570 22:47:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.570 22:47:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:36.570 22:47:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.570 22:47:21 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:36.570 22:47:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.570 22:47:21 -- target/tls.sh@28 -- # bdevperf_pid=1149234 00:20:36.570 22:47:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.570 22:47:21 -- target/tls.sh@31 -- # waitforlisten 1149234 /var/tmp/bdevperf.sock 00:20:36.570 22:47:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.570 22:47:21 -- common/autotest_common.sh@819 -- # '[' -z 1149234 ']' 00:20:36.570 22:47:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.570 22:47:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.570 22:47:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.570 22:47:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.570 22:47:21 -- common/autotest_common.sh@10 -- # set +x 00:20:36.570 [2024-04-15 22:47:21.200735] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:20:36.570 [2024-04-15 22:47:21.200789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149234 ] 00:20:36.570 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.570 [2024-04-15 22:47:21.256746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.570 [2024-04-15 22:47:21.307601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.513 22:47:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.513 22:47:21 -- common/autotest_common.sh@852 -- # return 0 00:20:37.513 22:47:21 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:37.513 [2024-04-15 22:47:22.113469] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.513 [2024-04-15 22:47:22.117806] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:37.513 [2024-04-15 22:47:22.117827] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:37.513 [2024-04-15 22:47:22.117852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.513 [2024-04-15 22:47:22.118445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x813a00 (107): Transport endpoint is not connected 00:20:37.513 [2024-04-15 22:47:22.119440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x813a00 (9): Bad file descriptor 00:20:37.513 [2024-04-15 22:47:22.120441] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:37.513 [2024-04-15 22:47:22.120448] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.513 [2024-04-15 22:47:22.120455] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
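These attach attempts fail by design: the initiator offers key1.txt, but the target has no PSK registered for these host/subsystem pairings, so the identity lookup fails, the target drops the connection, and bdevperf sees errno 107 followed by the controller-init failure that the surrounding NOT wrapper expects (return 1, es=1). The identity the target tried to resolve is printed in the error itself; purely as an illustration (the "NVMe0R01" prefix and field order are read off that message, not from a spec):

    # the identity searched for among hosts registered via nvmf_subsystem_add_host --psk
    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
    # -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2

Registering the pairing with nvmf_subsystem_add_host --psk, as the passing case later in this log does for key_long.txt, is what makes that lookup succeed.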
00:20:37.513 request: 00:20:37.513 { 00:20:37.513 "name": "TLSTEST", 00:20:37.513 "trtype": "tcp", 00:20:37.513 "traddr": "10.0.0.2", 00:20:37.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.513 "adrfam": "ipv4", 00:20:37.513 "trsvcid": "4420", 00:20:37.513 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.513 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:37.513 "method": "bdev_nvme_attach_controller", 00:20:37.513 "req_id": 1 00:20:37.513 } 00:20:37.513 Got JSON-RPC error response 00:20:37.513 response: 00:20:37.513 { 00:20:37.513 "code": -32602, 00:20:37.513 "message": "Invalid parameters" 00:20:37.513 } 00:20:37.513 22:47:22 -- target/tls.sh@36 -- # killprocess 1149234 00:20:37.513 22:47:22 -- common/autotest_common.sh@926 -- # '[' -z 1149234 ']' 00:20:37.513 22:47:22 -- common/autotest_common.sh@930 -- # kill -0 1149234 00:20:37.513 22:47:22 -- common/autotest_common.sh@931 -- # uname 00:20:37.513 22:47:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.513 22:47:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1149234 00:20:37.513 22:47:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:37.513 22:47:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:37.513 22:47:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1149234' 00:20:37.513 killing process with pid 1149234 00:20:37.513 22:47:22 -- common/autotest_common.sh@945 -- # kill 1149234 00:20:37.513 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.513 00:20:37.513 Latency(us) 00:20:37.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.514 =================================================================================================================== 00:20:37.514 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.514 22:47:22 -- common/autotest_common.sh@950 -- # wait 1149234 00:20:37.514 22:47:22 -- target/tls.sh@37 -- # return 1 00:20:37.514 22:47:22 -- common/autotest_common.sh@643 -- # es=1 00:20:37.514 22:47:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:37.514 22:47:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:37.514 22:47:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:37.514 22:47:22 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.514 22:47:22 -- common/autotest_common.sh@640 -- # local es=0 00:20:37.514 22:47:22 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.514 22:47:22 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:37.514 22:47:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.514 22:47:22 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:37.514 22:47:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.514 22:47:22 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.514 22:47:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.514 22:47:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.514 22:47:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.514 22:47:22 -- target/tls.sh@23 -- # psk= 00:20:37.514 22:47:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.514 22:47:22 -- target/tls.sh@28 
-- # bdevperf_pid=1149491 00:20:37.514 22:47:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.514 22:47:22 -- target/tls.sh@31 -- # waitforlisten 1149491 /var/tmp/bdevperf.sock 00:20:37.514 22:47:22 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.514 22:47:22 -- common/autotest_common.sh@819 -- # '[' -z 1149491 ']' 00:20:37.514 22:47:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.514 22:47:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.514 22:47:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.514 22:47:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.514 22:47:22 -- common/autotest_common.sh@10 -- # set +x 00:20:37.775 [2024-04-15 22:47:22.364435] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:37.775 [2024-04-15 22:47:22.364535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149491 ] 00:20:37.775 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.775 [2024-04-15 22:47:22.424778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.775 [2024-04-15 22:47:22.475268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.347 22:47:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.347 22:47:23 -- common/autotest_common.sh@852 -- # return 0 00:20:38.347 22:47:23 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:38.608 [2024-04-15 22:47:23.259692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:38.608 [2024-04-15 22:47:23.261517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc6340 (9): Bad file descriptor 00:20:38.608 [2024-04-15 22:47:23.262516] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.608 [2024-04-15 22:47:23.262523] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:38.608 [2024-04-15 22:47:23.262531] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:38.608 request: 00:20:38.608 { 00:20:38.608 "name": "TLSTEST", 00:20:38.608 "trtype": "tcp", 00:20:38.608 "traddr": "10.0.0.2", 00:20:38.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.608 "adrfam": "ipv4", 00:20:38.608 "trsvcid": "4420", 00:20:38.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.608 "method": "bdev_nvme_attach_controller", 00:20:38.608 "req_id": 1 00:20:38.608 } 00:20:38.608 Got JSON-RPC error response 00:20:38.608 response: 00:20:38.608 { 00:20:38.608 "code": -32602, 00:20:38.608 "message": "Invalid parameters" 00:20:38.608 } 00:20:38.608 22:47:23 -- target/tls.sh@36 -- # killprocess 1149491 00:20:38.608 22:47:23 -- common/autotest_common.sh@926 -- # '[' -z 1149491 ']' 00:20:38.608 22:47:23 -- common/autotest_common.sh@930 -- # kill -0 1149491 00:20:38.608 22:47:23 -- common/autotest_common.sh@931 -- # uname 00:20:38.608 22:47:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.608 22:47:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1149491 00:20:38.608 22:47:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:38.608 22:47:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:38.608 22:47:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1149491' 00:20:38.608 killing process with pid 1149491 00:20:38.608 22:47:23 -- common/autotest_common.sh@945 -- # kill 1149491 00:20:38.608 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.608 00:20:38.608 Latency(us) 00:20:38.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.608 =================================================================================================================== 00:20:38.608 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.608 22:47:23 -- common/autotest_common.sh@950 -- # wait 1149491 00:20:38.869 22:47:23 -- target/tls.sh@37 -- # return 1 00:20:38.869 22:47:23 -- common/autotest_common.sh@643 -- # es=1 00:20:38.869 22:47:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:38.869 22:47:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:38.869 22:47:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:38.869 22:47:23 -- target/tls.sh@167 -- # killprocess 1143731 00:20:38.869 22:47:23 -- common/autotest_common.sh@926 -- # '[' -z 1143731 ']' 00:20:38.869 22:47:23 -- common/autotest_common.sh@930 -- # kill -0 1143731 00:20:38.869 22:47:23 -- common/autotest_common.sh@931 -- # uname 00:20:38.869 22:47:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.869 22:47:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1143731 00:20:38.869 22:47:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:38.869 22:47:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:38.869 22:47:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1143731' 00:20:38.869 killing process with pid 1143731 00:20:38.869 22:47:23 -- common/autotest_common.sh@945 -- # kill 1143731 00:20:38.869 22:47:23 -- common/autotest_common.sh@950 -- # wait 1143731 00:20:38.869 22:47:23 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:38.869 22:47:23 -- target/tls.sh@49 -- # local key hash crc 00:20:38.869 22:47:23 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:38.869 22:47:23 -- target/tls.sh@51 -- # hash=02 00:20:38.869 22:47:23 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:20:38.869 22:47:23 -- target/tls.sh@52 -- # gzip -1 -c 00:20:38.869 22:47:23 -- target/tls.sh@52 -- # tail -c8 00:20:38.869 22:47:23 -- target/tls.sh@52 -- # head -c 4 00:20:38.869 22:47:23 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:38.869 22:47:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:38.869 22:47:23 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:38.869 22:47:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.869 22:47:23 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.869 22:47:23 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.869 22:47:23 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.869 22:47:23 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.869 22:47:23 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:38.869 22:47:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:38.869 22:47:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:38.869 22:47:23 -- common/autotest_common.sh@10 -- # set +x 00:20:38.869 22:47:23 -- nvmf/common.sh@469 -- # nvmfpid=1149688 00:20:38.869 22:47:23 -- nvmf/common.sh@470 -- # waitforlisten 1149688 00:20:38.870 22:47:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:39.130 22:47:23 -- common/autotest_common.sh@819 -- # '[' -z 1149688 ']' 00:20:39.130 22:47:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.130 22:47:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:39.130 22:47:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.130 22:47:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:39.130 22:47:23 -- common/autotest_common.sh@10 -- # set +x 00:20:39.130 [2024-04-15 22:47:23.724635] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:39.130 [2024-04-15 22:47:23.724691] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.130 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.130 [2024-04-15 22:47:23.798970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.130 [2024-04-15 22:47:23.861974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:39.130 [2024-04-15 22:47:23.862096] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.130 [2024-04-15 22:47:23.862104] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.130 [2024-04-15 22:47:23.862116] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
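The format_interchange_psk trace above (tls.sh@168) shows how the long-form key is built: the configured hex string is kept as ASCII, its CRC-32 is recovered from the footer of a gzip -1 stream (the last 8 bytes of a gzip file are the CRC-32, little-endian, followed by ISIZE), the 4 raw CRC bytes are appended, and the result is base64-wrapped into the NVMeTLSkey-1:<hash>:...: interchange form. A standalone sketch of the same pipeline with the key value from the log; note that carrying raw CRC bytes through a shell variable, as the script does, only works when they contain no NUL or trailing newline byte, which happens to hold for this key:

    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02
    # last 8 bytes of a gzip stream = CRC-32 (LE) + ISIZE; keep only the 4 CRC bytes
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
    # expected, matching the log above: NVMeTLSkey-1:02:MDAxMTIy...NTU2Njc3wWXNJw==: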
00:20:39.130 [2024-04-15 22:47:23.862142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.702 22:47:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:39.702 22:47:24 -- common/autotest_common.sh@852 -- # return 0 00:20:39.702 22:47:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:39.702 22:47:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:39.702 22:47:24 -- common/autotest_common.sh@10 -- # set +x 00:20:39.962 22:47:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.962 22:47:24 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.962 22:47:24 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.962 22:47:24 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.962 [2024-04-15 22:47:24.656911] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.962 22:47:24 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.223 22:47:24 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.223 [2024-04-15 22:47:24.945638] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.223 [2024-04-15 22:47:24.945810] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.223 22:47:24 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.517 malloc0 00:20:40.517 22:47:25 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.517 22:47:25 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.779 22:47:25 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.779 22:47:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.779 22:47:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.779 22:47:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.779 22:47:25 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:40.779 22:47:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.779 22:47:25 -- target/tls.sh@28 -- # bdevperf_pid=1150052 00:20:40.779 22:47:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.779 22:47:25 -- target/tls.sh@31 -- # waitforlisten 1150052 /var/tmp/bdevperf.sock 00:20:40.779 22:47:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.779 22:47:25 -- common/autotest_common.sh@819 -- # '[' -z 1150052 
']' 00:20:40.779 22:47:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.779 22:47:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:40.779 22:47:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.779 22:47:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:40.779 22:47:25 -- common/autotest_common.sh@10 -- # set +x 00:20:40.779 [2024-04-15 22:47:25.460029] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:40.779 [2024-04-15 22:47:25.460091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150052 ] 00:20:40.779 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.779 [2024-04-15 22:47:25.515962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.779 [2024-04-15 22:47:25.567096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.719 22:47:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:41.719 22:47:26 -- common/autotest_common.sh@852 -- # return 0 00:20:41.719 22:47:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:41.719 [2024-04-15 22:47:26.364936] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.719 TLSTESTn1 00:20:41.719 22:47:26 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:41.979 Running I/O for 10 seconds... 
00:20:51.977 00:20:51.977 Latency(us) 00:20:51.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.977 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:51.977 Verification LBA range: start 0x0 length 0x2000 00:20:51.977 TLSTESTn1 : 10.03 3186.96 12.45 0.00 0.00 40109.22 6062.08 57234.77 00:20:51.977 =================================================================================================================== 00:20:51.977 Total : 3186.96 12.45 0.00 0.00 40109.22 6062.08 57234.77 00:20:51.977 0 00:20:51.977 22:47:36 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.977 22:47:36 -- target/tls.sh@45 -- # killprocess 1150052 00:20:51.977 22:47:36 -- common/autotest_common.sh@926 -- # '[' -z 1150052 ']' 00:20:51.977 22:47:36 -- common/autotest_common.sh@930 -- # kill -0 1150052 00:20:51.977 22:47:36 -- common/autotest_common.sh@931 -- # uname 00:20:51.977 22:47:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:51.977 22:47:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1150052 00:20:51.977 22:47:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:51.977 22:47:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:51.977 22:47:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1150052' 00:20:51.977 killing process with pid 1150052 00:20:51.977 22:47:36 -- common/autotest_common.sh@945 -- # kill 1150052 00:20:51.977 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.977 00:20:51.977 Latency(us) 00:20:51.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.977 =================================================================================================================== 00:20:51.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.977 22:47:36 -- common/autotest_common.sh@950 -- # wait 1150052 00:20:51.977 22:47:36 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:51.977 22:47:36 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:51.977 22:47:36 -- common/autotest_common.sh@640 -- # local es=0 00:20:51.977 22:47:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:51.977 22:47:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:51.978 22:47:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:51.978 22:47:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:52.238 22:47:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:52.238 22:47:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:52.238 22:47:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:52.238 22:47:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:52.238 22:47:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:52.238 22:47:36 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:52.238 22:47:36 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:52.238 22:47:36 -- target/tls.sh@28 -- # bdevperf_pid=1152358 00:20:52.238 22:47:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.238 22:47:36 -- target/tls.sh@31 -- # waitforlisten 1152358 /var/tmp/bdevperf.sock 00:20:52.238 22:47:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:52.238 22:47:36 -- common/autotest_common.sh@819 -- # '[' -z 1152358 ']' 00:20:52.238 22:47:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.238 22:47:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:52.238 22:47:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.238 22:47:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:52.238 22:47:36 -- common/autotest_common.sh@10 -- # set +x 00:20:52.238 [2024-04-15 22:47:36.842389] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:52.238 [2024-04-15 22:47:36.842459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152358 ] 00:20:52.238 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.238 [2024-04-15 22:47:36.899600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.238 [2024-04-15 22:47:36.948431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.808 22:47:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:52.808 22:47:37 -- common/autotest_common.sh@852 -- # return 0 00:20:52.808 22:47:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:53.069 [2024-04-15 22:47:37.746365] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.069 [2024-04-15 22:47:37.746395] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:53.069 request: 00:20:53.069 { 00:20:53.069 "name": "TLSTEST", 00:20:53.069 "trtype": "tcp", 00:20:53.069 "traddr": "10.0.0.2", 00:20:53.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.069 "adrfam": "ipv4", 00:20:53.069 "trsvcid": "4420", 00:20:53.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.069 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:53.069 "method": "bdev_nvme_attach_controller", 00:20:53.069 "req_id": 1 00:20:53.069 } 00:20:53.069 Got JSON-RPC error response 00:20:53.069 response: 00:20:53.069 { 00:20:53.069 "code": -22, 00:20:53.069 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:53.069 } 00:20:53.069 22:47:37 -- target/tls.sh@36 -- # killprocess 1152358 00:20:53.069 22:47:37 -- common/autotest_common.sh@926 -- # '[' -z 1152358 ']' 00:20:53.069 22:47:37 -- 
common/autotest_common.sh@930 -- # kill -0 1152358 00:20:53.069 22:47:37 -- common/autotest_common.sh@931 -- # uname 00:20:53.069 22:47:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:53.069 22:47:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1152358 00:20:53.069 22:47:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:53.069 22:47:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:53.069 22:47:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1152358' 00:20:53.069 killing process with pid 1152358 00:20:53.069 22:47:37 -- common/autotest_common.sh@945 -- # kill 1152358 00:20:53.069 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.069 00:20:53.069 Latency(us) 00:20:53.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.069 =================================================================================================================== 00:20:53.069 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.069 22:47:37 -- common/autotest_common.sh@950 -- # wait 1152358 00:20:53.330 22:47:37 -- target/tls.sh@37 -- # return 1 00:20:53.330 22:47:37 -- common/autotest_common.sh@643 -- # es=1 00:20:53.330 22:47:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:53.330 22:47:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:53.330 22:47:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:53.330 22:47:37 -- target/tls.sh@183 -- # killprocess 1149688 00:20:53.330 22:47:37 -- common/autotest_common.sh@926 -- # '[' -z 1149688 ']' 00:20:53.330 22:47:37 -- common/autotest_common.sh@930 -- # kill -0 1149688 00:20:53.330 22:47:37 -- common/autotest_common.sh@931 -- # uname 00:20:53.330 22:47:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:53.330 22:47:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1149688 00:20:53.330 22:47:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:53.330 22:47:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:53.330 22:47:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1149688' 00:20:53.330 killing process with pid 1149688 00:20:53.330 22:47:37 -- common/autotest_common.sh@945 -- # kill 1149688 00:20:53.330 22:47:37 -- common/autotest_common.sh@950 -- # wait 1149688 00:20:53.330 22:47:38 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:53.330 22:47:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:53.330 22:47:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:53.330 22:47:38 -- common/autotest_common.sh@10 -- # set +x 00:20:53.330 22:47:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.330 22:47:38 -- nvmf/common.sh@469 -- # nvmfpid=1152704 00:20:53.330 22:47:38 -- nvmf/common.sh@470 -- # waitforlisten 1152704 00:20:53.330 22:47:38 -- common/autotest_common.sh@819 -- # '[' -z 1152704 ']' 00:20:53.330 22:47:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.330 22:47:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:53.330 22:47:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:53.330 22:47:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:53.330 22:47:38 -- common/autotest_common.sh@10 -- # set +x 00:20:53.591 [2024-04-15 22:47:38.162627] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:53.591 [2024-04-15 22:47:38.162677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.591 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.591 [2024-04-15 22:47:38.234838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.591 [2024-04-15 22:47:38.296242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:53.591 [2024-04-15 22:47:38.296362] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.591 [2024-04-15 22:47:38.296370] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.591 [2024-04-15 22:47:38.296377] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.591 [2024-04-15 22:47:38.296393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.533 22:47:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:54.533 22:47:39 -- common/autotest_common.sh@852 -- # return 0 00:20:54.533 22:47:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:54.533 22:47:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:54.533 22:47:39 -- common/autotest_common.sh@10 -- # set +x 00:20:54.533 22:47:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.533 22:47:39 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:54.533 22:47:39 -- common/autotest_common.sh@640 -- # local es=0 00:20:54.533 22:47:39 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:54.533 22:47:39 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:54.533 22:47:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:54.533 22:47:39 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:54.533 22:47:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:54.533 22:47:39 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:54.533 22:47:39 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:54.533 22:47:39 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:54.533 [2024-04-15 22:47:39.195754] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.533 22:47:39 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.793 22:47:39 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.793 [2024-04-15 22:47:39.512558] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.793 [2024-04-15 22:47:39.512740] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.793 22:47:39 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.054 malloc0 00:20:55.054 22:47:39 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:55.054 22:47:39 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:55.314 [2024-04-15 22:47:39.996650] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:55.314 [2024-04-15 22:47:39.996674] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:55.314 [2024-04-15 22:47:39.996695] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:55.314 request: 00:20:55.314 { 00:20:55.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.314 "host": "nqn.2016-06.io.spdk:host1", 00:20:55.314 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:55.314 "method": "nvmf_subsystem_add_host", 00:20:55.314 "req_id": 1 00:20:55.314 } 00:20:55.314 Got JSON-RPC error response 00:20:55.314 response: 00:20:55.314 { 00:20:55.314 "code": -32603, 00:20:55.314 "message": "Internal error" 00:20:55.314 } 00:20:55.314 22:47:40 -- common/autotest_common.sh@643 -- # es=1 00:20:55.314 22:47:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:55.314 22:47:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:55.314 22:47:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:55.314 22:47:40 -- target/tls.sh@189 -- # killprocess 1152704 00:20:55.314 22:47:40 -- common/autotest_common.sh@926 -- # '[' -z 1152704 ']' 00:20:55.314 22:47:40 -- common/autotest_common.sh@930 -- # kill -0 1152704 00:20:55.314 22:47:40 -- common/autotest_common.sh@931 -- # uname 00:20:55.314 22:47:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:55.314 22:47:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1152704 00:20:55.314 22:47:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:55.314 22:47:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:55.314 22:47:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1152704' 00:20:55.314 killing process with pid 1152704 00:20:55.314 22:47:40 -- common/autotest_common.sh@945 -- # kill 1152704 00:20:55.314 22:47:40 -- common/autotest_common.sh@950 -- # wait 1152704 00:20:55.575 22:47:40 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:55.575 22:47:40 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:55.575 22:47:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:55.575 22:47:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:55.575 22:47:40 -- common/autotest_common.sh@10 -- # set +x 00:20:55.575 22:47:40 -- nvmf/common.sh@469 -- # nvmfpid=1153081 00:20:55.575 22:47:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:20:55.575 22:47:40 -- nvmf/common.sh@470 -- # waitforlisten 1153081 00:20:55.575 22:47:40 -- common/autotest_common.sh@819 -- # '[' -z 1153081 ']' 00:20:55.575 22:47:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.575 22:47:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:55.575 22:47:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.575 22:47:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:55.575 22:47:40 -- common/autotest_common.sh@10 -- # set +x 00:20:55.575 [2024-04-15 22:47:40.269737] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:55.575 [2024-04-15 22:47:40.269792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.575 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.575 [2024-04-15 22:47:40.342154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.836 [2024-04-15 22:47:40.403045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:55.836 [2024-04-15 22:47:40.403169] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.836 [2024-04-15 22:47:40.403178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.836 [2024-04-15 22:47:40.403185] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
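Both permission failures above are about the file mode, not the key material: with key_long.txt at 0666 the initiator-side bdev_nvme_attach_controller is rejected with -22 "Could not retrieve PSK from file" and the target-side nvmf_subsystem_add_host with -32603 "Internal error", each logging "Incorrect permissions for PSK file" first. Restoring owner-only access, as tls.sh@190 does above, is all that is needed. A minimal reproduction sketch against the same target, with paths taken from the log:

    KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    chmod 0666 "$KEY"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk "$KEY"     # fails: Incorrect permissions for PSK file

    chmod 0600 "$KEY"                               # owner read/write only
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk "$KEY"     # accepted, as in the passing case below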
00:20:55.836 [2024-04-15 22:47:40.403201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.407 22:47:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:56.407 22:47:41 -- common/autotest_common.sh@852 -- # return 0 00:20:56.407 22:47:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:56.407 22:47:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:56.407 22:47:41 -- common/autotest_common.sh@10 -- # set +x 00:20:56.407 22:47:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.407 22:47:41 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:56.407 22:47:41 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:56.407 22:47:41 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:56.668 [2024-04-15 22:47:41.237887] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.668 22:47:41 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.668 22:47:41 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:56.928 [2024-04-15 22:47:41.554682] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.928 [2024-04-15 22:47:41.554863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.928 22:47:41 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:56.928 malloc0 00:20:56.928 22:47:41 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.189 22:47:41 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:57.450 22:47:42 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.450 22:47:42 -- target/tls.sh@197 -- # bdevperf_pid=1153445 00:20:57.450 22:47:42 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.450 22:47:42 -- target/tls.sh@200 -- # waitforlisten 1153445 /var/tmp/bdevperf.sock 00:20:57.450 22:47:42 -- common/autotest_common.sh@819 -- # '[' -z 1153445 ']' 00:20:57.450 22:47:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.450 22:47:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:57.450 22:47:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
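The setup_nvmf_tgt trace above (tls.sh@194 via @58-@67) is the entire TLS-enabled target bring-up for the passing case; stripped of the xtrace noise it is six RPC calls. A condensed sketch with the same NQNs, address, and key path as the log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt

    $RPC nvmf_create_transport -t tcp -o                                # TCP transport
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -s SPDK00000000000001 -m 10                                    # subsystem, up to 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 -k                                  # -k: TLS listener ("secure_channel": true in the saved config)
    $RPC bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB RAM-backed bdev
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk "$KEY"                         # per-host PSK, file mode 0600

The initiator side that follows mirrors it: start bdevperf with -z, attach the controller over /var/tmp/bdevperf.sock with the same --psk, then (as the earlier run_bdevperf invocations in this log do) drive I/O via bdevperf.py perform_tests.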
00:20:57.450 22:47:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:57.450 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:20:57.450 [2024-04-15 22:47:42.041183] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:57.450 [2024-04-15 22:47:42.041232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153445 ] 00:20:57.450 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.450 [2024-04-15 22:47:42.095575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.450 [2024-04-15 22:47:42.146677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.389 22:47:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:58.389 22:47:42 -- common/autotest_common.sh@852 -- # return 0 00:20:58.389 22:47:42 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:58.389 [2024-04-15 22:47:42.988569] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.389 TLSTESTn1 00:20:58.389 22:47:43 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:58.650 22:47:43 -- target/tls.sh@205 -- # tgtconf='{ 00:20:58.650 "subsystems": [ 00:20:58.650 { 00:20:58.650 "subsystem": "iobuf", 00:20:58.650 "config": [ 00:20:58.650 { 00:20:58.650 "method": "iobuf_set_options", 00:20:58.650 "params": { 00:20:58.650 "small_pool_count": 8192, 00:20:58.650 "large_pool_count": 1024, 00:20:58.650 "small_bufsize": 8192, 00:20:58.650 "large_bufsize": 135168 00:20:58.650 } 00:20:58.650 } 00:20:58.650 ] 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "subsystem": "sock", 00:20:58.650 "config": [ 00:20:58.650 { 00:20:58.650 "method": "sock_impl_set_options", 00:20:58.650 "params": { 00:20:58.650 "impl_name": "posix", 00:20:58.650 "recv_buf_size": 2097152, 00:20:58.650 "send_buf_size": 2097152, 00:20:58.650 "enable_recv_pipe": true, 00:20:58.650 "enable_quickack": false, 00:20:58.650 "enable_placement_id": 0, 00:20:58.650 "enable_zerocopy_send_server": true, 00:20:58.650 "enable_zerocopy_send_client": false, 00:20:58.650 "zerocopy_threshold": 0, 00:20:58.650 "tls_version": 0, 00:20:58.650 "enable_ktls": false 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "sock_impl_set_options", 00:20:58.650 "params": { 00:20:58.650 "impl_name": "ssl", 00:20:58.650 "recv_buf_size": 4096, 00:20:58.650 "send_buf_size": 4096, 00:20:58.650 "enable_recv_pipe": true, 00:20:58.650 "enable_quickack": false, 00:20:58.650 "enable_placement_id": 0, 00:20:58.650 "enable_zerocopy_send_server": true, 00:20:58.650 "enable_zerocopy_send_client": false, 00:20:58.650 "zerocopy_threshold": 0, 00:20:58.650 "tls_version": 0, 00:20:58.650 "enable_ktls": false 00:20:58.650 } 00:20:58.650 } 00:20:58.650 ] 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "subsystem": "vmd", 00:20:58.650 "config": [] 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "subsystem": "accel", 00:20:58.650 "config": [ 00:20:58.650 { 00:20:58.650 "method": "accel_set_options", 00:20:58.650 "params": { 00:20:58.650 "small_cache_size": 128, 
00:20:58.650 "large_cache_size": 16, 00:20:58.650 "task_count": 2048, 00:20:58.650 "sequence_count": 2048, 00:20:58.650 "buf_count": 2048 00:20:58.650 } 00:20:58.650 } 00:20:58.650 ] 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "subsystem": "bdev", 00:20:58.650 "config": [ 00:20:58.650 { 00:20:58.650 "method": "bdev_set_options", 00:20:58.650 "params": { 00:20:58.650 "bdev_io_pool_size": 65535, 00:20:58.650 "bdev_io_cache_size": 256, 00:20:58.650 "bdev_auto_examine": true, 00:20:58.650 "iobuf_small_cache_size": 128, 00:20:58.650 "iobuf_large_cache_size": 16 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "bdev_raid_set_options", 00:20:58.650 "params": { 00:20:58.650 "process_window_size_kb": 1024 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "bdev_iscsi_set_options", 00:20:58.650 "params": { 00:20:58.650 "timeout_sec": 30 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "bdev_nvme_set_options", 00:20:58.650 "params": { 00:20:58.650 "action_on_timeout": "none", 00:20:58.650 "timeout_us": 0, 00:20:58.650 "timeout_admin_us": 0, 00:20:58.650 "keep_alive_timeout_ms": 10000, 00:20:58.650 "transport_retry_count": 4, 00:20:58.650 "arbitration_burst": 0, 00:20:58.650 "low_priority_weight": 0, 00:20:58.650 "medium_priority_weight": 0, 00:20:58.650 "high_priority_weight": 0, 00:20:58.650 "nvme_adminq_poll_period_us": 10000, 00:20:58.650 "nvme_ioq_poll_period_us": 0, 00:20:58.650 "io_queue_requests": 0, 00:20:58.650 "delay_cmd_submit": true, 00:20:58.650 "bdev_retry_count": 3, 00:20:58.650 "transport_ack_timeout": 0, 00:20:58.650 "ctrlr_loss_timeout_sec": 0, 00:20:58.650 "reconnect_delay_sec": 0, 00:20:58.650 "fast_io_fail_timeout_sec": 0, 00:20:58.650 "generate_uuids": false, 00:20:58.650 "transport_tos": 0, 00:20:58.650 "io_path_stat": false, 00:20:58.650 "allow_accel_sequence": false 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "bdev_nvme_set_hotplug", 00:20:58.650 "params": { 00:20:58.650 "period_us": 100000, 00:20:58.650 "enable": false 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "bdev_malloc_create", 00:20:58.650 "params": { 00:20:58.650 "name": "malloc0", 00:20:58.650 "num_blocks": 8192, 00:20:58.650 "block_size": 4096, 00:20:58.650 "physical_block_size": 4096, 00:20:58.650 "uuid": "af4ff01c-a6af-4aa9-906e-c1d926ede164", 00:20:58.650 "optimal_io_boundary": 0 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "bdev_wait_for_examine" 00:20:58.650 } 00:20:58.650 ] 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "subsystem": "nbd", 00:20:58.650 "config": [] 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "subsystem": "scheduler", 00:20:58.650 "config": [ 00:20:58.650 { 00:20:58.650 "method": "framework_set_scheduler", 00:20:58.650 "params": { 00:20:58.650 "name": "static" 00:20:58.650 } 00:20:58.650 } 00:20:58.650 ] 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "subsystem": "nvmf", 00:20:58.650 "config": [ 00:20:58.650 { 00:20:58.650 "method": "nvmf_set_config", 00:20:58.650 "params": { 00:20:58.650 "discovery_filter": "match_any", 00:20:58.650 "admin_cmd_passthru": { 00:20:58.650 "identify_ctrlr": false 00:20:58.650 } 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "nvmf_set_max_subsystems", 00:20:58.650 "params": { 00:20:58.650 "max_subsystems": 1024 00:20:58.650 } 00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "nvmf_set_crdt", 00:20:58.650 "params": { 00:20:58.650 "crdt1": 0, 00:20:58.650 "crdt2": 0, 00:20:58.650 "crdt3": 0 00:20:58.650 } 
00:20:58.650 }, 00:20:58.650 { 00:20:58.650 "method": "nvmf_create_transport", 00:20:58.650 "params": { 00:20:58.650 "trtype": "TCP", 00:20:58.650 "max_queue_depth": 128, 00:20:58.650 "max_io_qpairs_per_ctrlr": 127, 00:20:58.650 "in_capsule_data_size": 4096, 00:20:58.650 "max_io_size": 131072, 00:20:58.650 "io_unit_size": 131072, 00:20:58.650 "max_aq_depth": 128, 00:20:58.650 "num_shared_buffers": 511, 00:20:58.650 "buf_cache_size": 4294967295, 00:20:58.651 "dif_insert_or_strip": false, 00:20:58.651 "zcopy": false, 00:20:58.651 "c2h_success": false, 00:20:58.651 "sock_priority": 0, 00:20:58.651 "abort_timeout_sec": 1 00:20:58.651 } 00:20:58.651 }, 00:20:58.651 { 00:20:58.651 "method": "nvmf_create_subsystem", 00:20:58.651 "params": { 00:20:58.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.651 "allow_any_host": false, 00:20:58.651 "serial_number": "SPDK00000000000001", 00:20:58.651 "model_number": "SPDK bdev Controller", 00:20:58.651 "max_namespaces": 10, 00:20:58.651 "min_cntlid": 1, 00:20:58.651 "max_cntlid": 65519, 00:20:58.651 "ana_reporting": false 00:20:58.651 } 00:20:58.651 }, 00:20:58.651 { 00:20:58.651 "method": "nvmf_subsystem_add_host", 00:20:58.651 "params": { 00:20:58.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.651 "host": "nqn.2016-06.io.spdk:host1", 00:20:58.651 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:58.651 } 00:20:58.651 }, 00:20:58.651 { 00:20:58.651 "method": "nvmf_subsystem_add_ns", 00:20:58.651 "params": { 00:20:58.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.651 "namespace": { 00:20:58.651 "nsid": 1, 00:20:58.651 "bdev_name": "malloc0", 00:20:58.651 "nguid": "AF4FF01CA6AF4AA9906EC1D926EDE164", 00:20:58.651 "uuid": "af4ff01c-a6af-4aa9-906e-c1d926ede164" 00:20:58.651 } 00:20:58.651 } 00:20:58.651 }, 00:20:58.651 { 00:20:58.651 "method": "nvmf_subsystem_add_listener", 00:20:58.651 "params": { 00:20:58.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.651 "listen_address": { 00:20:58.651 "trtype": "TCP", 00:20:58.651 "adrfam": "IPv4", 00:20:58.651 "traddr": "10.0.0.2", 00:20:58.651 "trsvcid": "4420" 00:20:58.651 }, 00:20:58.651 "secure_channel": true 00:20:58.651 } 00:20:58.651 } 00:20:58.651 ] 00:20:58.651 } 00:20:58.651 ] 00:20:58.651 }' 00:20:58.651 22:47:43 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:58.911 22:47:43 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:58.911 "subsystems": [ 00:20:58.911 { 00:20:58.911 "subsystem": "iobuf", 00:20:58.911 "config": [ 00:20:58.911 { 00:20:58.911 "method": "iobuf_set_options", 00:20:58.911 "params": { 00:20:58.911 "small_pool_count": 8192, 00:20:58.911 "large_pool_count": 1024, 00:20:58.911 "small_bufsize": 8192, 00:20:58.911 "large_bufsize": 135168 00:20:58.911 } 00:20:58.911 } 00:20:58.911 ] 00:20:58.911 }, 00:20:58.911 { 00:20:58.911 "subsystem": "sock", 00:20:58.911 "config": [ 00:20:58.911 { 00:20:58.911 "method": "sock_impl_set_options", 00:20:58.911 "params": { 00:20:58.911 "impl_name": "posix", 00:20:58.911 "recv_buf_size": 2097152, 00:20:58.911 "send_buf_size": 2097152, 00:20:58.911 "enable_recv_pipe": true, 00:20:58.911 "enable_quickack": false, 00:20:58.911 "enable_placement_id": 0, 00:20:58.911 "enable_zerocopy_send_server": true, 00:20:58.911 "enable_zerocopy_send_client": false, 00:20:58.911 "zerocopy_threshold": 0, 00:20:58.911 "tls_version": 0, 00:20:58.911 "enable_ktls": false 00:20:58.911 } 00:20:58.911 }, 00:20:58.911 { 00:20:58.911 "method": 
"sock_impl_set_options", 00:20:58.911 "params": { 00:20:58.911 "impl_name": "ssl", 00:20:58.911 "recv_buf_size": 4096, 00:20:58.911 "send_buf_size": 4096, 00:20:58.911 "enable_recv_pipe": true, 00:20:58.911 "enable_quickack": false, 00:20:58.911 "enable_placement_id": 0, 00:20:58.911 "enable_zerocopy_send_server": true, 00:20:58.911 "enable_zerocopy_send_client": false, 00:20:58.911 "zerocopy_threshold": 0, 00:20:58.911 "tls_version": 0, 00:20:58.911 "enable_ktls": false 00:20:58.911 } 00:20:58.911 } 00:20:58.911 ] 00:20:58.911 }, 00:20:58.911 { 00:20:58.911 "subsystem": "vmd", 00:20:58.911 "config": [] 00:20:58.911 }, 00:20:58.911 { 00:20:58.911 "subsystem": "accel", 00:20:58.911 "config": [ 00:20:58.911 { 00:20:58.911 "method": "accel_set_options", 00:20:58.911 "params": { 00:20:58.911 "small_cache_size": 128, 00:20:58.911 "large_cache_size": 16, 00:20:58.911 "task_count": 2048, 00:20:58.911 "sequence_count": 2048, 00:20:58.911 "buf_count": 2048 00:20:58.911 } 00:20:58.911 } 00:20:58.911 ] 00:20:58.911 }, 00:20:58.911 { 00:20:58.911 "subsystem": "bdev", 00:20:58.911 "config": [ 00:20:58.911 { 00:20:58.911 "method": "bdev_set_options", 00:20:58.911 "params": { 00:20:58.911 "bdev_io_pool_size": 65535, 00:20:58.911 "bdev_io_cache_size": 256, 00:20:58.911 "bdev_auto_examine": true, 00:20:58.911 "iobuf_small_cache_size": 128, 00:20:58.911 "iobuf_large_cache_size": 16 00:20:58.911 } 00:20:58.911 }, 00:20:58.911 { 00:20:58.911 "method": "bdev_raid_set_options", 00:20:58.911 "params": { 00:20:58.912 "process_window_size_kb": 1024 00:20:58.912 } 00:20:58.912 }, 00:20:58.912 { 00:20:58.912 "method": "bdev_iscsi_set_options", 00:20:58.912 "params": { 00:20:58.912 "timeout_sec": 30 00:20:58.912 } 00:20:58.912 }, 00:20:58.912 { 00:20:58.912 "method": "bdev_nvme_set_options", 00:20:58.912 "params": { 00:20:58.912 "action_on_timeout": "none", 00:20:58.912 "timeout_us": 0, 00:20:58.912 "timeout_admin_us": 0, 00:20:58.912 "keep_alive_timeout_ms": 10000, 00:20:58.912 "transport_retry_count": 4, 00:20:58.912 "arbitration_burst": 0, 00:20:58.912 "low_priority_weight": 0, 00:20:58.912 "medium_priority_weight": 0, 00:20:58.912 "high_priority_weight": 0, 00:20:58.912 "nvme_adminq_poll_period_us": 10000, 00:20:58.912 "nvme_ioq_poll_period_us": 0, 00:20:58.912 "io_queue_requests": 512, 00:20:58.912 "delay_cmd_submit": true, 00:20:58.912 "bdev_retry_count": 3, 00:20:58.912 "transport_ack_timeout": 0, 00:20:58.912 "ctrlr_loss_timeout_sec": 0, 00:20:58.912 "reconnect_delay_sec": 0, 00:20:58.912 "fast_io_fail_timeout_sec": 0, 00:20:58.912 "generate_uuids": false, 00:20:58.912 "transport_tos": 0, 00:20:58.912 "io_path_stat": false, 00:20:58.912 "allow_accel_sequence": false 00:20:58.912 } 00:20:58.912 }, 00:20:58.912 { 00:20:58.912 "method": "bdev_nvme_attach_controller", 00:20:58.912 "params": { 00:20:58.912 "name": "TLSTEST", 00:20:58.912 "trtype": "TCP", 00:20:58.912 "adrfam": "IPv4", 00:20:58.912 "traddr": "10.0.0.2", 00:20:58.912 "trsvcid": "4420", 00:20:58.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.912 "prchk_reftag": false, 00:20:58.912 "prchk_guard": false, 00:20:58.912 "ctrlr_loss_timeout_sec": 0, 00:20:58.912 "reconnect_delay_sec": 0, 00:20:58.912 "fast_io_fail_timeout_sec": 0, 00:20:58.912 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:58.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.912 "hdgst": false, 00:20:58.912 "ddgst": false 00:20:58.912 } 00:20:58.912 }, 00:20:58.912 { 00:20:58.912 "method": "bdev_nvme_set_hotplug", 00:20:58.912 
"params": { 00:20:58.912 "period_us": 100000, 00:20:58.912 "enable": false 00:20:58.912 } 00:20:58.912 }, 00:20:58.912 { 00:20:58.912 "method": "bdev_wait_for_examine" 00:20:58.912 } 00:20:58.912 ] 00:20:58.912 }, 00:20:58.912 { 00:20:58.912 "subsystem": "nbd", 00:20:58.912 "config": [] 00:20:58.912 } 00:20:58.912 ] 00:20:58.912 }' 00:20:58.912 22:47:43 -- target/tls.sh@208 -- # killprocess 1153445 00:20:58.912 22:47:43 -- common/autotest_common.sh@926 -- # '[' -z 1153445 ']' 00:20:58.912 22:47:43 -- common/autotest_common.sh@930 -- # kill -0 1153445 00:20:58.912 22:47:43 -- common/autotest_common.sh@931 -- # uname 00:20:58.912 22:47:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.912 22:47:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1153445 00:20:58.912 22:47:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:58.912 22:47:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:58.912 22:47:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1153445' 00:20:58.912 killing process with pid 1153445 00:20:58.912 22:47:43 -- common/autotest_common.sh@945 -- # kill 1153445 00:20:58.912 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.912 00:20:58.912 Latency(us) 00:20:58.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.912 =================================================================================================================== 00:20:58.912 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.912 22:47:43 -- common/autotest_common.sh@950 -- # wait 1153445 00:20:58.912 22:47:43 -- target/tls.sh@209 -- # killprocess 1153081 00:20:58.912 22:47:43 -- common/autotest_common.sh@926 -- # '[' -z 1153081 ']' 00:20:58.912 22:47:43 -- common/autotest_common.sh@930 -- # kill -0 1153081 00:20:58.912 22:47:43 -- common/autotest_common.sh@931 -- # uname 00:20:58.912 22:47:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.912 22:47:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1153081 00:20:59.172 22:47:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:59.172 22:47:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:59.172 22:47:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1153081' 00:20:59.172 killing process with pid 1153081 00:20:59.172 22:47:43 -- common/autotest_common.sh@945 -- # kill 1153081 00:20:59.172 22:47:43 -- common/autotest_common.sh@950 -- # wait 1153081 00:20:59.172 22:47:43 -- target/tls.sh@212 -- # echo '{ 00:20:59.172 "subsystems": [ 00:20:59.172 { 00:20:59.172 "subsystem": "iobuf", 00:20:59.172 "config": [ 00:20:59.172 { 00:20:59.172 "method": "iobuf_set_options", 00:20:59.172 "params": { 00:20:59.172 "small_pool_count": 8192, 00:20:59.172 "large_pool_count": 1024, 00:20:59.172 "small_bufsize": 8192, 00:20:59.172 "large_bufsize": 135168 00:20:59.172 } 00:20:59.172 } 00:20:59.172 ] 00:20:59.172 }, 00:20:59.172 { 00:20:59.172 "subsystem": "sock", 00:20:59.172 "config": [ 00:20:59.172 { 00:20:59.172 "method": "sock_impl_set_options", 00:20:59.172 "params": { 00:20:59.172 "impl_name": "posix", 00:20:59.172 "recv_buf_size": 2097152, 00:20:59.172 "send_buf_size": 2097152, 00:20:59.172 "enable_recv_pipe": true, 00:20:59.172 "enable_quickack": false, 00:20:59.172 "enable_placement_id": 0, 00:20:59.173 "enable_zerocopy_send_server": true, 00:20:59.173 "enable_zerocopy_send_client": false, 00:20:59.173 
"zerocopy_threshold": 0, 00:20:59.173 "tls_version": 0, 00:20:59.173 "enable_ktls": false 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "sock_impl_set_options", 00:20:59.173 "params": { 00:20:59.173 "impl_name": "ssl", 00:20:59.173 "recv_buf_size": 4096, 00:20:59.173 "send_buf_size": 4096, 00:20:59.173 "enable_recv_pipe": true, 00:20:59.173 "enable_quickack": false, 00:20:59.173 "enable_placement_id": 0, 00:20:59.173 "enable_zerocopy_send_server": true, 00:20:59.173 "enable_zerocopy_send_client": false, 00:20:59.173 "zerocopy_threshold": 0, 00:20:59.173 "tls_version": 0, 00:20:59.173 "enable_ktls": false 00:20:59.173 } 00:20:59.173 } 00:20:59.173 ] 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "subsystem": "vmd", 00:20:59.173 "config": [] 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "subsystem": "accel", 00:20:59.173 "config": [ 00:20:59.173 { 00:20:59.173 "method": "accel_set_options", 00:20:59.173 "params": { 00:20:59.173 "small_cache_size": 128, 00:20:59.173 "large_cache_size": 16, 00:20:59.173 "task_count": 2048, 00:20:59.173 "sequence_count": 2048, 00:20:59.173 "buf_count": 2048 00:20:59.173 } 00:20:59.173 } 00:20:59.173 ] 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "subsystem": "bdev", 00:20:59.173 "config": [ 00:20:59.173 { 00:20:59.173 "method": "bdev_set_options", 00:20:59.173 "params": { 00:20:59.173 "bdev_io_pool_size": 65535, 00:20:59.173 "bdev_io_cache_size": 256, 00:20:59.173 "bdev_auto_examine": true, 00:20:59.173 "iobuf_small_cache_size": 128, 00:20:59.173 "iobuf_large_cache_size": 16 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "bdev_raid_set_options", 00:20:59.173 "params": { 00:20:59.173 "process_window_size_kb": 1024 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "bdev_iscsi_set_options", 00:20:59.173 "params": { 00:20:59.173 "timeout_sec": 30 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "bdev_nvme_set_options", 00:20:59.173 "params": { 00:20:59.173 "action_on_timeout": "none", 00:20:59.173 "timeout_us": 0, 00:20:59.173 "timeout_admin_us": 0, 00:20:59.173 "keep_alive_timeout_ms": 10000, 00:20:59.173 "transport_retry_count": 4, 00:20:59.173 "arbitration_burst": 0, 00:20:59.173 "low_priority_weight": 0, 00:20:59.173 "medium_priority_weight": 0, 00:20:59.173 "high_priority_weight": 0, 00:20:59.173 "nvme_adminq_poll_period_us": 10000, 00:20:59.173 "nvme_ioq_poll_period_us": 0, 00:20:59.173 "io_queue_requests": 0, 00:20:59.173 "delay_cmd_submit": true, 00:20:59.173 "bdev_retry_count": 3, 00:20:59.173 "transport_ack_timeout": 0, 00:20:59.173 "ctrlr_loss_timeout_sec": 0, 00:20:59.173 "reconnect_delay_sec": 0, 00:20:59.173 "fast_io_fail_timeout_sec": 0, 00:20:59.173 "generate_uuids": false, 00:20:59.173 "transport_tos": 0, 00:20:59.173 "io_path_stat": false, 00:20:59.173 "allow_accel_sequence": false 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "bdev_nvme_set_hotplug", 00:20:59.173 "params": { 00:20:59.173 "period_us": 100000, 00:20:59.173 "enable": false 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "bdev_malloc_create", 00:20:59.173 "params": { 00:20:59.173 "name": "malloc0", 00:20:59.173 "num_blocks": 8192, 00:20:59.173 "block_size": 4096, 00:20:59.173 "physical_block_size": 4096, 00:20:59.173 "uuid": "af4ff01c-a6af-4aa9-906e-c1d926ede164", 00:20:59.173 "optimal_io_boundary": 0 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "bdev_wait_for_examine" 00:20:59.173 } 00:20:59.173 ] 00:20:59.173 }, 00:20:59.173 { 
00:20:59.173 "subsystem": "nbd", 00:20:59.173 "config": [] 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "subsystem": "scheduler", 00:20:59.173 "config": [ 00:20:59.173 { 00:20:59.173 "method": "framework_set_scheduler", 00:20:59.173 "params": { 00:20:59.173 "name": "static" 00:20:59.173 } 00:20:59.173 } 00:20:59.173 ] 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "subsystem": "nvmf", 00:20:59.173 "config": [ 00:20:59.173 { 00:20:59.173 "method": "nvmf_set_config", 00:20:59.173 "params": { 00:20:59.173 "discovery_filter": "match_any", 00:20:59.173 "admin_cmd_passthru": { 00:20:59.173 "identify_ctrlr": false 00:20:59.173 } 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "nvmf_set_max_subsystems", 00:20:59.173 "params": { 00:20:59.173 "max_subsystems": 1024 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "nvmf_set_crdt", 00:20:59.173 "params": { 00:20:59.173 "crdt1": 0, 00:20:59.173 "crdt2": 0, 00:20:59.173 "crdt3": 0 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "nvmf_create_transport", 00:20:59.173 "params": { 00:20:59.173 "trtype": "TCP", 00:20:59.173 "max_queue_depth": 128, 00:20:59.173 "max_io_qpairs_per_ctrlr": 127, 00:20:59.173 "in_capsule_data_size": 4096, 00:20:59.173 "max_io_size": 131072, 00:20:59.173 "io_unit_size": 131072, 00:20:59.173 "max_aq_depth": 128, 00:20:59.173 "num_shared_buffers": 511, 00:20:59.173 "buf_cache_size": 4294967295, 00:20:59.173 "dif_insert_or_strip": false, 00:20:59.173 "zcopy": false, 00:20:59.173 "c2h_success": false, 00:20:59.173 "sock_priority": 0, 00:20:59.173 "abort_timeout_sec": 1 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "nvmf_create_subsystem", 00:20:59.173 "params": { 00:20:59.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.173 "allow_any_host": false, 00:20:59.173 "serial_number": "SPDK00000000000001", 00:20:59.173 "model_number": "SPDK bdev Controller", 00:20:59.173 "max_namespaces": 10, 00:20:59.173 "min_cntlid": 1, 00:20:59.173 "max_cntlid": 65519, 00:20:59.173 "ana_reporting": false 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "nvmf_subsystem_add_host", 00:20:59.173 "params": { 00:20:59.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.173 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.173 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "nvmf_subsystem_add_ns", 00:20:59.173 "params": { 00:20:59.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.173 "namespace": { 00:20:59.173 "nsid": 1, 00:20:59.173 "bdev_name": "malloc0", 00:20:59.173 "nguid": "AF4FF01CA6AF4AA9906EC1D926EDE164", 00:20:59.173 "uuid": "af4ff01c-a6af-4aa9-906e-c1d926ede164" 00:20:59.173 } 00:20:59.173 } 00:20:59.173 }, 00:20:59.173 { 00:20:59.173 "method": "nvmf_subsystem_add_listener", 00:20:59.173 "params": { 00:20:59.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.173 "listen_address": { 00:20:59.173 "trtype": "TCP", 00:20:59.173 "adrfam": "IPv4", 00:20:59.173 "traddr": "10.0.0.2", 00:20:59.173 "trsvcid": "4420" 00:20:59.173 }, 00:20:59.173 "secure_channel": true 00:20:59.173 } 00:20:59.173 } 00:20:59.173 ] 00:20:59.173 } 00:20:59.173 ] 00:20:59.173 }' 00:20:59.173 22:47:43 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:59.173 22:47:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:59.173 22:47:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:59.173 22:47:43 -- common/autotest_common.sh@10 -- # set +x 
00:20:59.173 22:47:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:59.173 22:47:43 -- nvmf/common.sh@469 -- # nvmfpid=1153809 00:20:59.173 22:47:43 -- nvmf/common.sh@470 -- # waitforlisten 1153809 00:20:59.173 22:47:43 -- common/autotest_common.sh@819 -- # '[' -z 1153809 ']' 00:20:59.173 22:47:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.173 22:47:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:59.174 22:47:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.174 22:47:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:59.174 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:20:59.174 [2024-04-15 22:47:43.932626] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:59.174 [2024-04-15 22:47:43.932676] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.174 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.434 [2024-04-15 22:47:44.001919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.434 [2024-04-15 22:47:44.063406] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:59.434 [2024-04-15 22:47:44.063523] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.434 [2024-04-15 22:47:44.063531] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.434 [2024-04-15 22:47:44.063538] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
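Note how the target is launched: the configuration shown above is never written to disk but handed to nvmf_tgt on an already-open file descriptor (-c /dev/fd/62), inside the cvl_0_0_ns_spdk network namespace. A minimal sketch of the same pattern using bash process substitution follows; process substitution typically yields /dev/fd/63 rather than 62, but the mechanism is identical, and $target_conf is assumed to hold the echoed JSON.

    # Feed a generated JSON config to nvmf_tgt without touching the filesystem.
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(printf '%s' "$target_conf") &
    nvmfpid=$!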
00:20:59.434 [2024-04-15 22:47:44.063562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.695 [2024-04-15 22:47:44.244393] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.695 [2024-04-15 22:47:44.276407] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.695 [2024-04-15 22:47:44.276578] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.955 22:47:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:59.955 22:47:44 -- common/autotest_common.sh@852 -- # return 0 00:20:59.955 22:47:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:59.955 22:47:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:59.955 22:47:44 -- common/autotest_common.sh@10 -- # set +x 00:20:59.955 22:47:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.955 22:47:44 -- target/tls.sh@216 -- # bdevperf_pid=1154036 00:20:59.955 22:47:44 -- target/tls.sh@217 -- # waitforlisten 1154036 /var/tmp/bdevperf.sock 00:20:59.955 22:47:44 -- common/autotest_common.sh@819 -- # '[' -z 1154036 ']' 00:20:59.955 22:47:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.955 22:47:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:59.955 22:47:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.955 22:47:44 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:59.956 22:47:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:59.956 22:47:44 -- common/autotest_common.sh@10 -- # set +x 00:20:59.956 22:47:44 -- target/tls.sh@213 -- # echo '{ 00:20:59.956 "subsystems": [ 00:20:59.956 { 00:20:59.956 "subsystem": "iobuf", 00:20:59.956 "config": [ 00:20:59.956 { 00:20:59.956 "method": "iobuf_set_options", 00:20:59.956 "params": { 00:20:59.956 "small_pool_count": 8192, 00:20:59.956 "large_pool_count": 1024, 00:20:59.956 "small_bufsize": 8192, 00:20:59.956 "large_bufsize": 135168 00:20:59.956 } 00:20:59.956 } 00:20:59.956 ] 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "subsystem": "sock", 00:20:59.956 "config": [ 00:20:59.956 { 00:20:59.956 "method": "sock_impl_set_options", 00:20:59.956 "params": { 00:20:59.956 "impl_name": "posix", 00:20:59.956 "recv_buf_size": 2097152, 00:20:59.956 "send_buf_size": 2097152, 00:20:59.956 "enable_recv_pipe": true, 00:20:59.956 "enable_quickack": false, 00:20:59.956 "enable_placement_id": 0, 00:20:59.956 "enable_zerocopy_send_server": true, 00:20:59.956 "enable_zerocopy_send_client": false, 00:20:59.956 "zerocopy_threshold": 0, 00:20:59.956 "tls_version": 0, 00:20:59.956 "enable_ktls": false 00:20:59.956 } 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "method": "sock_impl_set_options", 00:20:59.956 "params": { 00:20:59.956 "impl_name": "ssl", 00:20:59.956 "recv_buf_size": 4096, 00:20:59.956 "send_buf_size": 4096, 00:20:59.956 "enable_recv_pipe": true, 00:20:59.956 "enable_quickack": false, 00:20:59.956 "enable_placement_id": 0, 00:20:59.956 "enable_zerocopy_send_server": true, 00:20:59.956 "enable_zerocopy_send_client": false, 00:20:59.956 "zerocopy_threshold": 0, 00:20:59.956 "tls_version": 0, 
00:20:59.956 "enable_ktls": false 00:20:59.956 } 00:20:59.956 } 00:20:59.956 ] 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "subsystem": "vmd", 00:20:59.956 "config": [] 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "subsystem": "accel", 00:20:59.956 "config": [ 00:20:59.956 { 00:20:59.956 "method": "accel_set_options", 00:20:59.956 "params": { 00:20:59.956 "small_cache_size": 128, 00:20:59.956 "large_cache_size": 16, 00:20:59.956 "task_count": 2048, 00:20:59.956 "sequence_count": 2048, 00:20:59.956 "buf_count": 2048 00:20:59.956 } 00:20:59.956 } 00:20:59.956 ] 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "subsystem": "bdev", 00:20:59.956 "config": [ 00:20:59.956 { 00:20:59.956 "method": "bdev_set_options", 00:20:59.956 "params": { 00:20:59.956 "bdev_io_pool_size": 65535, 00:20:59.956 "bdev_io_cache_size": 256, 00:20:59.956 "bdev_auto_examine": true, 00:20:59.956 "iobuf_small_cache_size": 128, 00:20:59.956 "iobuf_large_cache_size": 16 00:20:59.956 } 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "method": "bdev_raid_set_options", 00:20:59.956 "params": { 00:20:59.956 "process_window_size_kb": 1024 00:20:59.956 } 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "method": "bdev_iscsi_set_options", 00:20:59.956 "params": { 00:20:59.956 "timeout_sec": 30 00:20:59.956 } 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "method": "bdev_nvme_set_options", 00:20:59.956 "params": { 00:20:59.956 "action_on_timeout": "none", 00:20:59.956 "timeout_us": 0, 00:20:59.956 "timeout_admin_us": 0, 00:20:59.956 "keep_alive_timeout_ms": 10000, 00:20:59.956 "transport_retry_count": 4, 00:20:59.956 "arbitration_burst": 0, 00:20:59.956 "low_priority_weight": 0, 00:20:59.956 "medium_priority_weight": 0, 00:20:59.956 "high_priority_weight": 0, 00:20:59.956 "nvme_adminq_poll_period_us": 10000, 00:20:59.956 "nvme_ioq_poll_period_us": 0, 00:20:59.956 "io_queue_requests": 512, 00:20:59.956 "delay_cmd_submit": true, 00:20:59.956 "bdev_retry_count": 3, 00:20:59.956 "transport_ack_timeout": 0, 00:20:59.956 "ctrlr_loss_timeout_sec": 0, 00:20:59.956 "reconnect_delay_sec": 0, 00:20:59.956 "fast_io_fail_timeout_sec": 0, 00:20:59.956 "generate_uuids": false, 00:20:59.956 "transport_tos": 0, 00:20:59.956 "io_path_stat": false, 00:20:59.956 "allow_accel_sequence": false 00:20:59.956 } 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "method": "bdev_nvme_attach_controller", 00:20:59.956 "params": { 00:20:59.956 "name": "TLSTEST", 00:20:59.956 "trtype": "TCP", 00:20:59.956 "adrfam": "IPv4", 00:20:59.956 "traddr": "10.0.0.2", 00:20:59.956 "trsvcid": "4420", 00:20:59.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.956 "prchk_reftag": false, 00:20:59.956 "prchk_guard": false, 00:20:59.956 "ctrlr_loss_timeout_sec": 0, 00:20:59.956 "reconnect_delay_sec": 0, 00:20:59.956 "fast_io_fail_timeout_sec": 0, 00:20:59.956 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:59.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.956 "hdgst": false, 00:20:59.956 "ddgst": false 00:20:59.956 } 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "method": "bdev_nvme_set_hotplug", 00:20:59.956 "params": { 00:20:59.956 "period_us": 100000, 00:20:59.956 "enable": false 00:20:59.956 } 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "method": "bdev_wait_for_examine" 00:20:59.956 } 00:20:59.956 ] 00:20:59.956 }, 00:20:59.956 { 00:20:59.956 "subsystem": "nbd", 00:20:59.956 "config": [] 00:20:59.956 } 00:20:59.956 ] 00:20:59.956 }' 00:21:00.217 [2024-04-15 22:47:44.780062] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 
initialization... 00:21:00.217 [2024-04-15 22:47:44.780111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154036 ] 00:21:00.217 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.217 [2024-04-15 22:47:44.834337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.217 [2024-04-15 22:47:44.885743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.217 [2024-04-15 22:47:45.002373] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.787 22:47:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:00.787 22:47:45 -- common/autotest_common.sh@852 -- # return 0 00:21:00.787 22:47:45 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:01.047 Running I/O for 10 seconds... 00:21:11.064 00:21:11.064 Latency(us) 00:21:11.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.064 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:11.064 Verification LBA range: start 0x0 length 0x2000 00:21:11.064 TLSTESTn1 : 10.05 3549.86 13.87 0.00 0.00 35988.16 6826.67 55705.60 00:21:11.064 =================================================================================================================== 00:21:11.064 Total : 3549.86 13.87 0.00 0.00 35988.16 6826.67 55705.60 00:21:11.064 0 00:21:11.064 22:47:55 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.064 22:47:55 -- target/tls.sh@223 -- # killprocess 1154036 00:21:11.064 22:47:55 -- common/autotest_common.sh@926 -- # '[' -z 1154036 ']' 00:21:11.064 22:47:55 -- common/autotest_common.sh@930 -- # kill -0 1154036 00:21:11.064 22:47:55 -- common/autotest_common.sh@931 -- # uname 00:21:11.064 22:47:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:11.064 22:47:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1154036 00:21:11.064 22:47:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:11.064 22:47:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:11.064 22:47:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1154036' 00:21:11.064 killing process with pid 1154036 00:21:11.064 22:47:55 -- common/autotest_common.sh@945 -- # kill 1154036 00:21:11.064 Received shutdown signal, test time was about 10.000000 seconds 00:21:11.064 00:21:11.064 Latency(us) 00:21:11.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.064 =================================================================================================================== 00:21:11.064 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.064 22:47:55 -- common/autotest_common.sh@950 -- # wait 1154036 00:21:11.391 22:47:55 -- target/tls.sh@224 -- # killprocess 1153809 00:21:11.391 22:47:55 -- common/autotest_common.sh@926 -- # '[' -z 1153809 ']' 00:21:11.391 22:47:55 -- common/autotest_common.sh@930 -- # kill -0 1153809 00:21:11.391 22:47:55 -- common/autotest_common.sh@931 -- # uname 00:21:11.391 22:47:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:11.391 22:47:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1153809 00:21:11.391 22:47:55 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:11.391 22:47:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:11.391 22:47:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1153809' 00:21:11.391 killing process with pid 1153809 00:21:11.391 22:47:55 -- common/autotest_common.sh@945 -- # kill 1153809 00:21:11.391 22:47:55 -- common/autotest_common.sh@950 -- # wait 1153809 00:21:11.391 22:47:56 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:21:11.391 22:47:56 -- target/tls.sh@227 -- # cleanup 00:21:11.391 22:47:56 -- target/tls.sh@15 -- # process_shm --id 0 00:21:11.391 22:47:56 -- common/autotest_common.sh@796 -- # type=--id 00:21:11.391 22:47:56 -- common/autotest_common.sh@797 -- # id=0 00:21:11.391 22:47:56 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:11.391 22:47:56 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:11.391 22:47:56 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:11.391 22:47:56 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:11.391 22:47:56 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:11.391 22:47:56 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:11.391 nvmf_trace.0 00:21:11.391 22:47:56 -- common/autotest_common.sh@811 -- # return 0 00:21:11.391 22:47:56 -- target/tls.sh@16 -- # killprocess 1154036 00:21:11.391 22:47:56 -- common/autotest_common.sh@926 -- # '[' -z 1154036 ']' 00:21:11.391 22:47:56 -- common/autotest_common.sh@930 -- # kill -0 1154036 00:21:11.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1154036) - No such process 00:21:11.391 22:47:56 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1154036 is not found' 00:21:11.391 Process with pid 1154036 is not found 00:21:11.391 22:47:56 -- target/tls.sh@17 -- # nvmftestfini 00:21:11.391 22:47:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:11.391 22:47:56 -- nvmf/common.sh@116 -- # sync 00:21:11.391 22:47:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:11.391 22:47:56 -- nvmf/common.sh@119 -- # set +e 00:21:11.391 22:47:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:11.391 22:47:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:11.653 rmmod nvme_tcp 00:21:11.653 rmmod nvme_fabrics 00:21:11.653 rmmod nvme_keyring 00:21:11.653 22:47:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:11.653 22:47:56 -- nvmf/common.sh@123 -- # set -e 00:21:11.653 22:47:56 -- nvmf/common.sh@124 -- # return 0 00:21:11.653 22:47:56 -- nvmf/common.sh@477 -- # '[' -n 1153809 ']' 00:21:11.653 22:47:56 -- nvmf/common.sh@478 -- # killprocess 1153809 00:21:11.653 22:47:56 -- common/autotest_common.sh@926 -- # '[' -z 1153809 ']' 00:21:11.653 22:47:56 -- common/autotest_common.sh@930 -- # kill -0 1153809 00:21:11.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1153809) - No such process 00:21:11.653 22:47:56 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1153809 is not found' 00:21:11.653 Process with pid 1153809 is not found 00:21:11.653 22:47:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:11.653 22:47:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:11.653 22:47:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:11.653 22:47:56 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.653 22:47:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:11.653 22:47:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.653 22:47:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.653 22:47:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.571 22:47:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:13.571 22:47:58 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:13.571 00:21:13.571 real 1m12.867s 00:21:13.571 user 1m46.449s 00:21:13.571 sys 0m26.674s 00:21:13.571 22:47:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.571 22:47:58 -- common/autotest_common.sh@10 -- # set +x 00:21:13.571 ************************************ 00:21:13.571 END TEST nvmf_tls 00:21:13.571 ************************************ 00:21:13.571 22:47:58 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:13.571 22:47:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:13.571 22:47:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:13.571 22:47:58 -- common/autotest_common.sh@10 -- # set +x 00:21:13.571 ************************************ 00:21:13.571 START TEST nvmf_fips 00:21:13.571 ************************************ 00:21:13.571 22:47:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:13.973 * Looking for test storage... 
00:21:13.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:13.973 22:47:58 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.973 22:47:58 -- nvmf/common.sh@7 -- # uname -s 00:21:13.973 22:47:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.973 22:47:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.973 22:47:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.973 22:47:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.973 22:47:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.973 22:47:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.973 22:47:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.973 22:47:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.973 22:47:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.973 22:47:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.973 22:47:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:13.973 22:47:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:13.973 22:47:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.973 22:47:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.973 22:47:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.973 22:47:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.973 22:47:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.973 22:47:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.973 22:47:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.973 22:47:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.973 22:47:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.973 22:47:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.973 22:47:58 -- paths/export.sh@5 -- # export PATH 00:21:13.973 22:47:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.973 22:47:58 -- nvmf/common.sh@46 -- # : 0 00:21:13.973 22:47:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:13.973 22:47:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:13.973 22:47:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:13.973 22:47:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.973 22:47:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.973 22:47:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:13.973 22:47:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:13.973 22:47:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:13.973 22:47:58 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.973 22:47:58 -- fips/fips.sh@89 -- # check_openssl_version 00:21:13.973 22:47:58 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:13.973 22:47:58 -- fips/fips.sh@85 -- # openssl version 00:21:13.973 22:47:58 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:13.973 22:47:58 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:13.973 22:47:58 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:13.973 22:47:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:13.973 22:47:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:13.973 22:47:58 -- scripts/common.sh@335 -- # IFS=.-: 00:21:13.973 22:47:58 -- scripts/common.sh@335 -- # read -ra ver1 00:21:13.973 22:47:58 -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.973 22:47:58 -- scripts/common.sh@336 -- # read -ra ver2 00:21:13.973 22:47:58 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:13.973 22:47:58 -- scripts/common.sh@339 -- # ver1_l=3 00:21:13.973 22:47:58 -- scripts/common.sh@340 -- # ver2_l=3 00:21:13.973 22:47:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:13.973 22:47:58 -- scripts/common.sh@343 -- # case "$op" in 00:21:13.973 22:47:58 -- scripts/common.sh@347 -- # : 1 00:21:13.973 22:47:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:13.973 22:47:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.973 22:47:58 -- scripts/common.sh@364 -- # decimal 3 00:21:13.973 22:47:58 -- scripts/common.sh@352 -- # local d=3 00:21:13.973 22:47:58 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:13.973 22:47:58 -- scripts/common.sh@354 -- # echo 3 00:21:13.973 22:47:58 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:13.973 22:47:58 -- scripts/common.sh@365 -- # decimal 3 00:21:13.973 22:47:58 -- scripts/common.sh@352 -- # local d=3 00:21:13.973 22:47:58 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:13.973 22:47:58 -- scripts/common.sh@354 -- # echo 3 00:21:13.973 22:47:58 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:13.973 22:47:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:13.973 22:47:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:13.973 22:47:58 -- scripts/common.sh@363 -- # (( v++ )) 00:21:13.973 22:47:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.973 22:47:58 -- scripts/common.sh@364 -- # decimal 0 00:21:13.973 22:47:58 -- scripts/common.sh@352 -- # local d=0 00:21:13.973 22:47:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:13.973 22:47:58 -- scripts/common.sh@354 -- # echo 0 00:21:13.973 22:47:58 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:13.973 22:47:58 -- scripts/common.sh@365 -- # decimal 0 00:21:13.973 22:47:58 -- scripts/common.sh@352 -- # local d=0 00:21:13.973 22:47:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:13.973 22:47:58 -- scripts/common.sh@354 -- # echo 0 00:21:13.973 22:47:58 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:13.973 22:47:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:13.973 22:47:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:13.973 22:47:58 -- scripts/common.sh@363 -- # (( v++ )) 00:21:13.973 22:47:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.973 22:47:58 -- scripts/common.sh@364 -- # decimal 9 00:21:13.973 22:47:58 -- scripts/common.sh@352 -- # local d=9 00:21:13.973 22:47:58 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:13.973 22:47:58 -- scripts/common.sh@354 -- # echo 9 00:21:13.973 22:47:58 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:13.973 22:47:58 -- scripts/common.sh@365 -- # decimal 0 00:21:13.973 22:47:58 -- scripts/common.sh@352 -- # local d=0 00:21:13.973 22:47:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:13.973 22:47:58 -- scripts/common.sh@354 -- # echo 0 00:21:13.973 22:47:58 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:13.973 22:47:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:13.973 22:47:58 -- scripts/common.sh@366 -- # return 0 00:21:13.973 22:47:58 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:13.973 22:47:58 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:13.973 22:47:58 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:13.973 22:47:58 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:13.973 22:47:58 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:13.973 22:47:58 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:13.973 22:47:58 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:13.973 22:47:58 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:13.973 22:47:58 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:13.973 22:47:58 -- fips/fips.sh@114 -- # build_openssl_config 00:21:13.974 22:47:58 -- fips/fips.sh@37 -- # cat 00:21:13.974 22:47:58 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:13.974 22:47:58 -- fips/fips.sh@58 -- # cat - 00:21:13.974 22:47:58 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:13.974 22:47:58 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:13.974 22:47:58 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:13.974 22:47:58 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:13.974 22:47:58 -- fips/fips.sh@117 -- # openssl list -providers 00:21:13.974 22:47:58 -- fips/fips.sh@117 -- # grep name 00:21:13.974 22:47:58 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:13.974 22:47:58 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:13.974 22:47:58 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:13.974 22:47:58 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:13.974 22:47:58 -- common/autotest_common.sh@640 -- # local es=0 00:21:13.974 22:47:58 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:13.974 22:47:58 -- fips/fips.sh@128 -- # : 00:21:13.974 22:47:58 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:13.974 22:47:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:13.974 22:47:58 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:13.974 22:47:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:13.974 22:47:58 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:13.974 22:47:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:13.974 22:47:58 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:13.974 22:47:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:13.974 22:47:58 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:13.974 Error setting digest 00:21:13.974 00029676657F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:13.974 00029676657F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:13.974 22:47:58 -- common/autotest_common.sh@643 -- # es=1 00:21:13.974 22:47:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:13.974 22:47:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:13.974 22:47:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
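Before touching NVMe at all, fips.sh validates the crypto environment: it requires OpenSSL >= 3.0.0, locates the fips.so provider module, forces OPENSSL_CONF=spdk_fips.conf, lists the loaded providers, and finally runs a negative test -- "openssl md5" must fail, which is exactly the "Error setting digest ... unsupported" output above. A small standalone sketch of that negative check, assuming spdk_fips.conf is a config that loads only the base and fips providers:

    # Confirm FIPS mode by checking that a non-approved digest is rejected.
    export OPENSSL_CONF=spdk_fips.conf          # assumed FIPS-only provider config
    openssl list -providers | grep -i name      # should list the base and fips providers
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded -- FIPS mode is NOT in effect" >&2
        exit 1
    fi
    echo "MD5 rejected as expected -- FIPS provider is active"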
00:21:13.974 22:47:58 -- fips/fips.sh@131 -- # nvmftestinit 00:21:13.974 22:47:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:13.974 22:47:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.974 22:47:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:13.974 22:47:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:13.974 22:47:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:13.974 22:47:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.974 22:47:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.974 22:47:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.974 22:47:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:13.974 22:47:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:13.974 22:47:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:13.974 22:47:58 -- common/autotest_common.sh@10 -- # set +x 00:21:22.115 22:48:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:22.115 22:48:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:22.115 22:48:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:22.115 22:48:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:22.115 22:48:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:22.115 22:48:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:22.115 22:48:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:22.115 22:48:06 -- nvmf/common.sh@294 -- # net_devs=() 00:21:22.115 22:48:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:22.115 22:48:06 -- nvmf/common.sh@295 -- # e810=() 00:21:22.115 22:48:06 -- nvmf/common.sh@295 -- # local -ga e810 00:21:22.115 22:48:06 -- nvmf/common.sh@296 -- # x722=() 00:21:22.115 22:48:06 -- nvmf/common.sh@296 -- # local -ga x722 00:21:22.115 22:48:06 -- nvmf/common.sh@297 -- # mlx=() 00:21:22.115 22:48:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:22.115 22:48:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.115 22:48:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:22.115 22:48:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:22.115 22:48:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:22.115 22:48:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:22.115 22:48:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:22.115 22:48:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:22.115 22:48:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:22.115 22:48:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:22.115 Found 0000:31:00.0 
(0x8086 - 0x159b) 00:21:22.115 22:48:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:22.115 22:48:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:22.115 22:48:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.115 22:48:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.115 22:48:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:22.115 22:48:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:22.116 22:48:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:22.116 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:22.116 22:48:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:22.116 22:48:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:22.116 22:48:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.116 22:48:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:22.116 22:48:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.116 22:48:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:22.116 Found net devices under 0000:31:00.0: cvl_0_0 00:21:22.116 22:48:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.116 22:48:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:22.116 22:48:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.116 22:48:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:22.116 22:48:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.116 22:48:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:22.116 Found net devices under 0000:31:00.1: cvl_0_1 00:21:22.116 22:48:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.116 22:48:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:22.116 22:48:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:22.116 22:48:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:22.116 22:48:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.116 22:48:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.116 22:48:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.116 22:48:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:22.116 22:48:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.116 22:48:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.116 22:48:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:22.116 22:48:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.116 22:48:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.116 22:48:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:22.116 22:48:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:22.116 22:48:06 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:21:22.116 22:48:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.116 22:48:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.116 22:48:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.116 22:48:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:22.116 22:48:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.116 22:48:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.116 22:48:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.116 22:48:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:22.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:21:22.116 00:21:22.116 --- 10.0.0.2 ping statistics --- 00:21:22.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.116 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:21:22.116 22:48:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.444 ms 00:21:22.116 00:21:22.116 --- 10.0.0.1 ping statistics --- 00:21:22.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.116 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:21:22.116 22:48:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.116 22:48:06 -- nvmf/common.sh@410 -- # return 0 00:21:22.116 22:48:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:22.116 22:48:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.116 22:48:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:22.116 22:48:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.116 22:48:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:22.116 22:48:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:22.116 22:48:06 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:22.116 22:48:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:22.116 22:48:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:22.116 22:48:06 -- common/autotest_common.sh@10 -- # set +x 00:21:22.116 22:48:06 -- nvmf/common.sh@469 -- # nvmfpid=1161030 00:21:22.116 22:48:06 -- nvmf/common.sh@470 -- # waitforlisten 1161030 00:21:22.116 22:48:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:22.116 22:48:06 -- common/autotest_common.sh@819 -- # '[' -z 1161030 ']' 00:21:22.116 22:48:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.116 22:48:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:22.116 22:48:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.116 22:48:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:22.116 22:48:06 -- common/autotest_common.sh@10 -- # set +x 00:21:22.116 [2024-04-15 22:48:06.753241] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
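The block above is the usual SPDK two-port setup for a physical NIC: the two E810 ports (cvl_0_0 / cvl_0_1) are split between a fresh network namespace for the target and the host namespace for the initiator, addressed on 10.0.0.0/24, and verified with a ping in each direction. Condensed from the commands visible in the log (interface names and addresses are simply the ones this run used):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # host -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host reachability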
00:21:22.116 [2024-04-15 22:48:06.753311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.116 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.116 [2024-04-15 22:48:06.831036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.116 [2024-04-15 22:48:06.899499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:22.116 [2024-04-15 22:48:06.899626] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.116 [2024-04-15 22:48:06.899634] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.116 [2024-04-15 22:48:06.899641] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.116 [2024-04-15 22:48:06.899658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.690 22:48:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:22.690 22:48:07 -- common/autotest_common.sh@852 -- # return 0 00:21:22.690 22:48:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:22.690 22:48:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:22.690 22:48:07 -- common/autotest_common.sh@10 -- # set +x 00:21:22.951 22:48:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.951 22:48:07 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:22.952 22:48:07 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:22.952 22:48:07 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:22.952 22:48:07 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:22.952 22:48:07 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:22.952 22:48:07 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:22.952 22:48:07 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:22.952 22:48:07 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.952 [2024-04-15 22:48:07.666355] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.952 [2024-04-15 22:48:07.682361] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.952 [2024-04-15 22:48:07.682533] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.952 malloc0 00:21:22.952 22:48:07 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.952 22:48:07 -- fips/fips.sh@148 -- # bdevperf_pid=1161620 00:21:22.952 22:48:07 -- fips/fips.sh@149 -- # waitforlisten 1161620 /var/tmp/bdevperf.sock 00:21:22.952 22:48:07 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.952 22:48:07 -- common/autotest_common.sh@819 -- # '[' -z 1161620 ']' 00:21:22.952 22:48:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.952 22:48:07 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:22.952 22:48:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.952 22:48:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:22.952 22:48:07 -- common/autotest_common.sh@10 -- # set +x 00:21:23.213 [2024-04-15 22:48:07.801303] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:21:23.213 [2024-04-15 22:48:07.801354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161620 ] 00:21:23.213 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.213 [2024-04-15 22:48:07.855733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.213 [2024-04-15 22:48:07.907189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.787 22:48:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:23.787 22:48:08 -- common/autotest_common.sh@852 -- # return 0 00:21:23.787 22:48:08 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:24.048 [2024-04-15 22:48:08.668841] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.048 TLSTESTn1 00:21:24.048 22:48:08 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.048 Running I/O for 10 seconds... 
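The FIPS/TLS leg above reduces to four steps: write the pre-shared key to a mode-0600 file, start bdevperf in wait mode on its own RPC socket, attach a controller to the target's 10.0.0.2:4420 listener with that PSK, then drive the 10-second verify workload. A sketch condensed from the traces, with paths shortened to the spdk checkout root; the test itself waits for /var/tmp/bdevperf.sock to appear before issuing the attach.

echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt
# -z makes bdevperf wait for bdevs to be attached over its RPC socket
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# attach the target subsystem over TCP, presenting the PSK so the connection is TLS-protected
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
# run the configured 10-second verify workload against the TLSTESTn1 bdev
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests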
00:21:36.288 00:21:36.288 Latency(us) 00:21:36.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.288 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:36.288 Verification LBA range: start 0x0 length 0x2000 00:21:36.288 TLSTESTn1 : 10.02 3299.59 12.89 0.00 0.00 38758.50 5898.24 59856.21 00:21:36.288 =================================================================================================================== 00:21:36.288 Total : 3299.59 12.89 0.00 0.00 38758.50 5898.24 59856.21 00:21:36.288 0 00:21:36.288 22:48:18 -- fips/fips.sh@1 -- # cleanup 00:21:36.288 22:48:18 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:36.288 22:48:18 -- common/autotest_common.sh@796 -- # type=--id 00:21:36.288 22:48:18 -- common/autotest_common.sh@797 -- # id=0 00:21:36.288 22:48:18 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:36.288 22:48:18 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:36.288 22:48:18 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:36.288 22:48:18 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:36.288 22:48:18 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:36.288 22:48:18 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:36.288 nvmf_trace.0 00:21:36.288 22:48:18 -- common/autotest_common.sh@811 -- # return 0 00:21:36.288 22:48:18 -- fips/fips.sh@16 -- # killprocess 1161620 00:21:36.288 22:48:18 -- common/autotest_common.sh@926 -- # '[' -z 1161620 ']' 00:21:36.288 22:48:18 -- common/autotest_common.sh@930 -- # kill -0 1161620 00:21:36.288 22:48:18 -- common/autotest_common.sh@931 -- # uname 00:21:36.288 22:48:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:36.288 22:48:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1161620 00:21:36.288 22:48:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:36.288 22:48:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:36.288 22:48:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1161620' 00:21:36.288 killing process with pid 1161620 00:21:36.288 22:48:19 -- common/autotest_common.sh@945 -- # kill 1161620 00:21:36.288 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.288 00:21:36.288 Latency(us) 00:21:36.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.288 =================================================================================================================== 00:21:36.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:36.288 22:48:19 -- common/autotest_common.sh@950 -- # wait 1161620 00:21:36.288 22:48:19 -- fips/fips.sh@17 -- # nvmftestfini 00:21:36.288 22:48:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:36.288 22:48:19 -- nvmf/common.sh@116 -- # sync 00:21:36.288 22:48:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:36.288 22:48:19 -- nvmf/common.sh@119 -- # set +e 00:21:36.288 22:48:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:36.288 22:48:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:36.288 rmmod nvme_tcp 00:21:36.288 rmmod nvme_fabrics 00:21:36.288 rmmod nvme_keyring 00:21:36.288 22:48:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:36.288 22:48:19 -- nvmf/common.sh@123 -- # set -e 00:21:36.288 22:48:19 -- nvmf/common.sh@124 -- # return 0 
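Before the bdevperf and target processes are killed, cleanup archives the tracepoint shared-memory file produced by the -e 0xFFFF flag so the run can be decoded offline; the tar call in the trace above is the whole mechanism. Done by hand it would look like this (file name taken from the log):

# the target was started with -i 0 -e 0xFFFF, so its trace buffer is /dev/shm/nvmf_trace.0
tar -C /dev/shm/ -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0
# or decode a snapshot live, as the startup notice suggests
spdk_trace -s nvmf -i 0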
00:21:36.288 22:48:19 -- nvmf/common.sh@477 -- # '[' -n 1161030 ']' 00:21:36.288 22:48:19 -- nvmf/common.sh@478 -- # killprocess 1161030 00:21:36.288 22:48:19 -- common/autotest_common.sh@926 -- # '[' -z 1161030 ']' 00:21:36.288 22:48:19 -- common/autotest_common.sh@930 -- # kill -0 1161030 00:21:36.288 22:48:19 -- common/autotest_common.sh@931 -- # uname 00:21:36.288 22:48:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:36.288 22:48:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1161030 00:21:36.288 22:48:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:36.288 22:48:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:36.288 22:48:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1161030' 00:21:36.288 killing process with pid 1161030 00:21:36.288 22:48:19 -- common/autotest_common.sh@945 -- # kill 1161030 00:21:36.288 22:48:19 -- common/autotest_common.sh@950 -- # wait 1161030 00:21:36.288 22:48:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:36.288 22:48:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:36.288 22:48:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:36.288 22:48:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.288 22:48:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:36.288 22:48:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.288 22:48:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.288 22:48:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.861 22:48:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:36.861 22:48:21 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.861 00:21:36.861 real 0m23.144s 00:21:36.861 user 0m22.628s 00:21:36.861 sys 0m10.984s 00:21:36.861 22:48:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.861 22:48:21 -- common/autotest_common.sh@10 -- # set +x 00:21:36.862 ************************************ 00:21:36.862 END TEST nvmf_fips 00:21:36.862 ************************************ 00:21:36.862 22:48:21 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:36.862 22:48:21 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:36.862 22:48:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:36.862 22:48:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:36.862 22:48:21 -- common/autotest_common.sh@10 -- # set +x 00:21:36.862 ************************************ 00:21:36.862 START TEST nvmf_fuzz 00:21:36.862 ************************************ 00:21:36.862 22:48:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:36.862 * Looking for test storage... 
00:21:36.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:36.862 22:48:21 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.862 22:48:21 -- nvmf/common.sh@7 -- # uname -s 00:21:36.862 22:48:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.862 22:48:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.862 22:48:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.862 22:48:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.862 22:48:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.862 22:48:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.862 22:48:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.862 22:48:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.862 22:48:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.862 22:48:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.862 22:48:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.862 22:48:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.862 22:48:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.862 22:48:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.862 22:48:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.862 22:48:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.862 22:48:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.862 22:48:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.862 22:48:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.862 22:48:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.862 22:48:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.862 22:48:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.862 22:48:21 -- paths/export.sh@5 -- # export PATH 00:21:36.862 22:48:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.862 22:48:21 -- nvmf/common.sh@46 -- # : 0 00:21:36.862 22:48:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:36.862 22:48:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:36.862 22:48:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:36.862 22:48:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.862 22:48:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.862 22:48:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:36.862 22:48:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:36.862 22:48:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:36.862 22:48:21 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:36.862 22:48:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:36.862 22:48:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.862 22:48:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:36.862 22:48:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:36.862 22:48:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:36.862 22:48:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.862 22:48:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.862 22:48:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.862 22:48:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:36.862 22:48:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:36.862 22:48:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:36.862 22:48:21 -- common/autotest_common.sh@10 -- # set +x 00:21:45.014 22:48:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:45.014 22:48:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:45.014 22:48:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:45.014 22:48:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:45.014 22:48:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:45.014 22:48:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:45.014 22:48:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:45.014 22:48:29 -- nvmf/common.sh@294 -- # net_devs=() 00:21:45.014 22:48:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:45.014 22:48:29 -- nvmf/common.sh@295 -- # e810=() 00:21:45.014 22:48:29 -- nvmf/common.sh@295 -- # local -ga e810 00:21:45.014 22:48:29 -- nvmf/common.sh@296 -- # x722=() 
00:21:45.014 22:48:29 -- nvmf/common.sh@296 -- # local -ga x722 00:21:45.014 22:48:29 -- nvmf/common.sh@297 -- # mlx=() 00:21:45.014 22:48:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:45.014 22:48:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.014 22:48:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:45.014 22:48:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:45.014 22:48:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:45.014 22:48:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:45.014 22:48:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:45.014 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:45.014 22:48:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:45.014 22:48:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:45.014 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:45.014 22:48:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:45.014 22:48:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:45.014 22:48:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.014 22:48:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:45.014 22:48:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.014 22:48:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:45.014 Found net devices under 0000:31:00.0: cvl_0_0 00:21:45.014 22:48:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
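The device discovery above keys on PCI vendor:device IDs (0x8086:0x159b is an E810 port driven by ice) and then reads the netdev names the kernel registered under each function's sysfs node. Reproducing the lookup by hand on this machine would look roughly like:

ls /sys/bus/pci/devices/0000:31:00.0/net/    # cvl_0_0
ls /sys/bus/pci/devices/0000:31:00.1/net/    # cvl_0_1
lspci -d 8086:159b                           # both E810 functions matched by the e810 list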
00:21:45.014 22:48:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:45.014 22:48:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.014 22:48:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:45.014 22:48:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.014 22:48:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:45.014 Found net devices under 0000:31:00.1: cvl_0_1 00:21:45.014 22:48:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.014 22:48:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:45.014 22:48:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:45.014 22:48:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:45.014 22:48:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.014 22:48:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.014 22:48:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.014 22:48:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:45.014 22:48:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.014 22:48:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.014 22:48:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:45.014 22:48:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.014 22:48:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.014 22:48:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:45.014 22:48:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:45.014 22:48:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.014 22:48:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.014 22:48:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.014 22:48:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.014 22:48:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:45.014 22:48:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.014 22:48:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.014 22:48:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.014 22:48:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:45.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:21:45.014 00:21:45.014 --- 10.0.0.2 ping statistics --- 00:21:45.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.014 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:21:45.014 22:48:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:21:45.014 00:21:45.014 --- 10.0.0.1 ping statistics --- 00:21:45.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.014 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:21:45.014 22:48:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.014 22:48:29 -- nvmf/common.sh@410 -- # return 0 00:21:45.014 22:48:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:45.014 22:48:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.014 22:48:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:45.014 22:48:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.014 22:48:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:45.014 22:48:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:45.014 22:48:29 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:45.014 22:48:29 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1168582 00:21:45.014 22:48:29 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:45.014 22:48:29 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1168582 00:21:45.015 22:48:29 -- common/autotest_common.sh@819 -- # '[' -z 1168582 ']' 00:21:45.015 22:48:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.015 22:48:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:45.015 22:48:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
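The traces that follow (fabrics_fuzz.sh@19 through @32) expose a RAM-backed Malloc bdev as cnode1 over TCP and then aim nvme_fuzz at it twice: a 30-second randomized run with a fixed seed, then a replay of the canned cases in example.json. rpc_cmd in the trace issues the same RPCs that scripts/rpc.py accepts on the command line, so the sequence condenses to roughly:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# randomized admin/io commands for 30 s against the listener, seed fixed for reproducibility
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
# second pass replays the canned commands from example.json
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
    -j ./test/app/fuzz/nvme_fuzz/example.json -a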
00:21:45.015 22:48:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:45.015 22:48:29 -- common/autotest_common.sh@10 -- # set +x 00:21:45.588 22:48:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:45.588 22:48:30 -- common/autotest_common.sh@852 -- # return 0 00:21:45.588 22:48:30 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.588 22:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.588 22:48:30 -- common/autotest_common.sh@10 -- # set +x 00:21:45.588 22:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.588 22:48:30 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:45.588 22:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.588 22:48:30 -- common/autotest_common.sh@10 -- # set +x 00:21:45.588 Malloc0 00:21:45.588 22:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.588 22:48:30 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:45.588 22:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.588 22:48:30 -- common/autotest_common.sh@10 -- # set +x 00:21:45.588 22:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.588 22:48:30 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:45.588 22:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.588 22:48:30 -- common/autotest_common.sh@10 -- # set +x 00:21:45.588 22:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.588 22:48:30 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.588 22:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.588 22:48:30 -- common/autotest_common.sh@10 -- # set +x 00:21:45.588 22:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.588 22:48:30 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:45.588 22:48:30 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:17.766 Fuzzing completed. Shutting down the fuzz application 00:22:17.766 00:22:17.766 Dumping successful admin opcodes: 00:22:17.766 8, 9, 10, 24, 00:22:17.766 Dumping successful io opcodes: 00:22:17.766 0, 9, 00:22:17.766 NS: 0x200003aeff00 I/O qp, Total commands completed: 825034, total successful commands: 4784, random_seed: 4227207360 00:22:17.766 NS: 0x200003aeff00 admin qp, Total commands completed: 106087, total successful commands: 872, random_seed: 2720430912 00:22:17.766 22:49:00 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:17.766 Fuzzing completed. 
Shutting down the fuzz application 00:22:17.766 00:22:17.766 Dumping successful admin opcodes: 00:22:17.766 24, 00:22:17.766 Dumping successful io opcodes: 00:22:17.766 00:22:17.766 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2206350907 00:22:17.766 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2206463267 00:22:17.766 22:49:02 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.766 22:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.766 22:49:02 -- common/autotest_common.sh@10 -- # set +x 00:22:17.766 22:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.766 22:49:02 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:17.766 22:49:02 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:17.766 22:49:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:17.766 22:49:02 -- nvmf/common.sh@116 -- # sync 00:22:17.766 22:49:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:17.766 22:49:02 -- nvmf/common.sh@119 -- # set +e 00:22:17.766 22:49:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:17.766 22:49:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:17.766 rmmod nvme_tcp 00:22:17.766 rmmod nvme_fabrics 00:22:17.766 rmmod nvme_keyring 00:22:17.766 22:49:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:17.766 22:49:02 -- nvmf/common.sh@123 -- # set -e 00:22:17.766 22:49:02 -- nvmf/common.sh@124 -- # return 0 00:22:17.766 22:49:02 -- nvmf/common.sh@477 -- # '[' -n 1168582 ']' 00:22:17.766 22:49:02 -- nvmf/common.sh@478 -- # killprocess 1168582 00:22:17.766 22:49:02 -- common/autotest_common.sh@926 -- # '[' -z 1168582 ']' 00:22:17.766 22:49:02 -- common/autotest_common.sh@930 -- # kill -0 1168582 00:22:17.766 22:49:02 -- common/autotest_common.sh@931 -- # uname 00:22:17.766 22:49:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.766 22:49:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1168582 00:22:17.766 22:49:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:17.766 22:49:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:17.766 22:49:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1168582' 00:22:17.766 killing process with pid 1168582 00:22:17.766 22:49:02 -- common/autotest_common.sh@945 -- # kill 1168582 00:22:17.766 22:49:02 -- common/autotest_common.sh@950 -- # wait 1168582 00:22:17.766 22:49:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:17.766 22:49:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:17.766 22:49:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:17.766 22:49:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:17.766 22:49:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:17.766 22:49:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.766 22:49:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.766 22:49:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.679 22:49:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:19.679 22:49:04 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:19.679 00:22:19.679 real 0m42.910s 00:22:19.679 user 0m56.558s 00:22:19.679 sys 
0m15.693s 00:22:19.679 22:49:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:19.679 22:49:04 -- common/autotest_common.sh@10 -- # set +x 00:22:19.679 ************************************ 00:22:19.679 END TEST nvmf_fuzz 00:22:19.679 ************************************ 00:22:19.940 22:49:04 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:19.940 22:49:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:19.940 22:49:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:19.940 22:49:04 -- common/autotest_common.sh@10 -- # set +x 00:22:19.940 ************************************ 00:22:19.940 START TEST nvmf_multiconnection 00:22:19.940 ************************************ 00:22:19.940 22:49:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:19.940 * Looking for test storage... 00:22:19.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.940 22:49:04 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.940 22:49:04 -- nvmf/common.sh@7 -- # uname -s 00:22:19.940 22:49:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.940 22:49:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.940 22:49:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.940 22:49:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.940 22:49:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.940 22:49:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.940 22:49:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.940 22:49:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.940 22:49:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.940 22:49:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.940 22:49:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.940 22:49:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.940 22:49:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.940 22:49:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.940 22:49:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.940 22:49:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.940 22:49:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.940 22:49:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.940 22:49:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.940 22:49:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.940 22:49:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.940 22:49:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.940 22:49:04 -- paths/export.sh@5 -- # export PATH 00:22:19.940 22:49:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.940 22:49:04 -- nvmf/common.sh@46 -- # : 0 00:22:19.941 22:49:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:19.941 22:49:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:19.941 22:49:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:19.941 22:49:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.941 22:49:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.941 22:49:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:19.941 22:49:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:19.941 22:49:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:19.941 22:49:04 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.941 22:49:04 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:19.941 22:49:04 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:19.941 22:49:04 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:19.941 22:49:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:19.941 22:49:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.941 22:49:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:19.941 22:49:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:19.941 22:49:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:19.941 22:49:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.941 22:49:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.941 22:49:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.941 22:49:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:19.941 22:49:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:19.941 22:49:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:19.941 22:49:04 -- common/autotest_common.sh@10 -- 
# set +x 00:22:28.085 22:49:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:28.085 22:49:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:28.085 22:49:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:28.085 22:49:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:28.085 22:49:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:28.085 22:49:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:28.085 22:49:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:28.085 22:49:12 -- nvmf/common.sh@294 -- # net_devs=() 00:22:28.085 22:49:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:28.085 22:49:12 -- nvmf/common.sh@295 -- # e810=() 00:22:28.085 22:49:12 -- nvmf/common.sh@295 -- # local -ga e810 00:22:28.085 22:49:12 -- nvmf/common.sh@296 -- # x722=() 00:22:28.085 22:49:12 -- nvmf/common.sh@296 -- # local -ga x722 00:22:28.085 22:49:12 -- nvmf/common.sh@297 -- # mlx=() 00:22:28.085 22:49:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:28.085 22:49:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.085 22:49:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:28.085 22:49:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:28.085 22:49:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:28.085 22:49:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:28.085 22:49:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:28.085 22:49:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:28.085 22:49:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:28.085 22:49:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:28.085 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:28.085 22:49:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:28.085 22:49:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:28.086 22:49:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:28.086 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:28.086 22:49:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.086 22:49:12 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:28.086 22:49:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:28.086 22:49:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.086 22:49:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:28.086 22:49:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.086 22:49:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:28.086 Found net devices under 0000:31:00.0: cvl_0_0 00:22:28.086 22:49:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.086 22:49:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:28.086 22:49:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.086 22:49:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:28.086 22:49:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.086 22:49:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:28.086 Found net devices under 0000:31:00.1: cvl_0_1 00:22:28.086 22:49:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.086 22:49:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:28.086 22:49:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:28.086 22:49:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:28.086 22:49:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.086 22:49:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.086 22:49:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.086 22:49:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:28.086 22:49:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.086 22:49:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.086 22:49:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:28.086 22:49:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.086 22:49:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.086 22:49:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:28.086 22:49:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:28.086 22:49:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.086 22:49:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.086 22:49:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.086 22:49:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.086 22:49:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:28.086 22:49:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.086 22:49:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.086 22:49:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.086 22:49:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:28.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:28.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:22:28.086 00:22:28.086 --- 10.0.0.2 ping statistics --- 00:22:28.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.086 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:22:28.086 22:49:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:22:28.086 00:22:28.086 --- 10.0.0.1 ping statistics --- 00:22:28.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.086 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:22:28.086 22:49:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.086 22:49:12 -- nvmf/common.sh@410 -- # return 0 00:22:28.086 22:49:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:28.086 22:49:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.086 22:49:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:28.086 22:49:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.086 22:49:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:28.086 22:49:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:28.086 22:49:12 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:28.086 22:49:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:28.086 22:49:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:28.086 22:49:12 -- common/autotest_common.sh@10 -- # set +x 00:22:28.086 22:49:12 -- nvmf/common.sh@469 -- # nvmfpid=1179682 00:22:28.086 22:49:12 -- nvmf/common.sh@470 -- # waitforlisten 1179682 00:22:28.086 22:49:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.086 22:49:12 -- common/autotest_common.sh@819 -- # '[' -z 1179682 ']' 00:22:28.086 22:49:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.086 22:49:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.086 22:49:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.086 22:49:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.086 22:49:12 -- common/autotest_common.sh@10 -- # set +x 00:22:28.086 [2024-04-15 22:49:12.577340] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:22:28.086 [2024-04-15 22:49:12.577403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.086 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.086 [2024-04-15 22:49:12.656083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.086 [2024-04-15 22:49:12.729499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:28.086 [2024-04-15 22:49:12.729639] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:28.086 [2024-04-15 22:49:12.729650] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.086 [2024-04-15 22:49:12.729658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.086 [2024-04-15 22:49:12.729772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.086 [2024-04-15 22:49:12.729878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.086 [2024-04-15 22:49:12.730012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.086 [2024-04-15 22:49:12.730014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.658 22:49:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:28.658 22:49:13 -- common/autotest_common.sh@852 -- # return 0 00:22:28.658 22:49:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:28.658 22:49:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:28.658 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.658 22:49:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.658 22:49:13 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.658 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.658 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.658 [2024-04-15 22:49:13.398715] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.658 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.658 22:49:13 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:28.658 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.658 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.658 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.658 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.658 Malloc1 00:22:28.658 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.658 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:28.658 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.658 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.658 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.658 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:28.658 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.658 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.658 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.658 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.658 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.658 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.658 [2024-04-15 22:49:13.462091] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.659 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.925 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:28.925 22:49:13 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 Malloc2 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.925 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 Malloc3 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.925 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 Malloc4 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.925 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 Malloc5 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.925 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:28.925 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.925 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.926 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:28.926 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.926 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.926 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.926 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:28.926 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.926 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 Malloc6 00:22:28.926 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.926 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:28.926 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.926 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.926 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:28.926 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.926 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.926 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:28.926 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.926 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 22:49:13 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.926 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.926 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:28.926 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.926 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 Malloc7 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.187 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 Malloc8 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.187 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 Malloc9 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
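For reference, the xtrace records above are the per-subsystem setup loop of multiconnection.sh (script lines 21-25), repeated here for Malloc2..Malloc11 / cnode2..cnode11: create a 64 MiB malloc bdev, create a subsystem, attach the bdev as a namespace, and add a TCP listener. A minimal stand-alone sketch of that loop, assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the nvmf_tgt already running in this test; the target address, port and serial prefix are copied from the log records, and NVMF_SUBSYS=11 is inferred from the later 'seq 1 11':

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i                              # 64 MiB bdev, 512 B blocks
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i     # allow any host, serial SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i         # expose the bdev as namespace
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done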
00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.187 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 Malloc10 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.187 22:49:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 Malloc11 00:22:29.187 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.187 22:49:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:29.187 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.187 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.188 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.188 22:49:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:29.188 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.188 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.188 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.188 22:49:13 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:29.188 22:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.188 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:22:29.449 22:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.449 22:49:13 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:29.449 22:49:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.449 22:49:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:30.838 22:49:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:30.838 22:49:15 -- common/autotest_common.sh@1177 -- # local i=0 00:22:30.838 22:49:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:30.838 22:49:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:30.838 22:49:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:32.753 22:49:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:32.753 22:49:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:32.753 22:49:17 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:32.753 22:49:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:32.753 22:49:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:32.753 22:49:17 -- common/autotest_common.sh@1187 -- # return 0 00:22:32.753 22:49:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.753 22:49:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:34.666 22:49:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:34.666 22:49:18 -- common/autotest_common.sh@1177 -- # local i=0 00:22:34.666 22:49:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:34.666 22:49:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:34.666 22:49:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:36.578 22:49:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:36.578 22:49:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:36.578 22:49:21 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:36.578 22:49:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:36.578 22:49:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:36.578 22:49:21 -- common/autotest_common.sh@1187 -- # return 0 00:22:36.578 22:49:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.578 22:49:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:37.966 22:49:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:37.966 22:49:22 -- common/autotest_common.sh@1177 -- # local i=0 00:22:37.966 22:49:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:37.966 22:49:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:37.966 22:49:22 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:22:40.521 22:49:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:40.521 22:49:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:40.521 22:49:24 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:40.521 22:49:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:40.521 22:49:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:40.521 22:49:24 -- common/autotest_common.sh@1187 -- # return 0 00:22:40.521 22:49:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:40.521 22:49:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:41.505 22:49:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:41.505 22:49:26 -- common/autotest_common.sh@1177 -- # local i=0 00:22:41.505 22:49:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:41.505 22:49:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:41.505 22:49:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:43.429 22:49:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:43.720 22:49:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:43.720 22:49:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:43.720 22:49:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:43.720 22:49:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:43.720 22:49:28 -- common/autotest_common.sh@1187 -- # return 0 00:22:43.720 22:49:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.720 22:49:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:45.645 22:49:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:45.645 22:49:29 -- common/autotest_common.sh@1177 -- # local i=0 00:22:45.645 22:49:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:45.645 22:49:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:45.645 22:49:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:47.562 22:49:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:47.562 22:49:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:47.562 22:49:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:47.562 22:49:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:47.562 22:49:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:47.562 22:49:32 -- common/autotest_common.sh@1187 -- # return 0 00:22:47.562 22:49:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:47.562 22:49:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:48.951 22:49:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:48.951 22:49:33 -- common/autotest_common.sh@1177 -- # local i=0 00:22:48.951 22:49:33 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:48.951 22:49:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:48.951 22:49:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:51.500 22:49:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:51.500 22:49:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:51.500 22:49:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:51.500 22:49:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:51.500 22:49:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:51.500 22:49:35 -- common/autotest_common.sh@1187 -- # return 0 00:22:51.500 22:49:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.500 22:49:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:52.885 22:49:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:52.885 22:49:37 -- common/autotest_common.sh@1177 -- # local i=0 00:22:52.885 22:49:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:52.885 22:49:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:52.885 22:49:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:54.798 22:49:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:54.798 22:49:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:54.798 22:49:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:54.798 22:49:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:54.798 22:49:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:54.798 22:49:39 -- common/autotest_common.sh@1187 -- # return 0 00:22:54.798 22:49:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.798 22:49:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:56.711 22:49:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:56.711 22:49:41 -- common/autotest_common.sh@1177 -- # local i=0 00:22:56.711 22:49:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:56.711 22:49:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:56.711 22:49:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:58.622 22:49:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:58.622 22:49:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:58.622 22:49:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:58.622 22:49:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:58.622 22:49:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:58.622 22:49:43 -- common/autotest_common.sh@1187 -- # return 0 00:22:58.622 22:49:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:58.622 22:49:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:00.535 22:49:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:00.535 
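Each connect step above follows the same pattern: nvme connect over TCP to one cnodeN, then the waitforserial helper polls lsblk (up to 15 attempts, 2 s apart) until a block device whose SERIAL column matches SPDKN appears. A condensed sketch under those assumptions, reusing the host NQN/ID and target address copied from the log records; nvme-cli is assumed to be installed on the initiator side:

    HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
    for i in $(seq 1 11); do
        nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
            -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        # waitforserial: give the kernel time to enumerate the namespace, then
        # count devices whose serial matches SPDK$i
        for try in $(seq 1 15); do
            sleep 2
            [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ] && break
        done
    done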
22:49:44 -- common/autotest_common.sh@1177 -- # local i=0 00:23:00.535 22:49:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:00.535 22:49:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:00.535 22:49:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:02.450 22:49:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:02.450 22:49:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:02.450 22:49:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:23:02.450 22:49:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:02.450 22:49:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:02.450 22:49:46 -- common/autotest_common.sh@1187 -- # return 0 00:23:02.450 22:49:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.450 22:49:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:04.365 22:49:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:04.365 22:49:48 -- common/autotest_common.sh@1177 -- # local i=0 00:23:04.365 22:49:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:04.365 22:49:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:04.365 22:49:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:06.344 22:49:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:06.344 22:49:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:06.344 22:49:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:23:06.344 22:49:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:06.344 22:49:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:06.344 22:49:50 -- common/autotest_common.sh@1187 -- # return 0 00:23:06.344 22:49:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.344 22:49:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:08.258 22:49:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:08.258 22:49:52 -- common/autotest_common.sh@1177 -- # local i=0 00:23:08.258 22:49:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:08.258 22:49:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:08.258 22:49:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:10.171 22:49:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:10.171 22:49:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:10.171 22:49:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:23:10.171 22:49:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:10.171 22:49:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:10.171 22:49:54 -- common/autotest_common.sh@1187 -- # return 0 00:23:10.171 22:49:54 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:10.171 [global] 00:23:10.171 thread=1 00:23:10.171 invalidate=1 00:23:10.171 rw=read 00:23:10.171 time_based=1 00:23:10.171 
runtime=10 00:23:10.171 ioengine=libaio 00:23:10.171 direct=1 00:23:10.171 bs=262144 00:23:10.171 iodepth=64 00:23:10.171 norandommap=1 00:23:10.171 numjobs=1 00:23:10.171 00:23:10.171 [job0] 00:23:10.171 filename=/dev/nvme0n1 00:23:10.171 [job1] 00:23:10.171 filename=/dev/nvme10n1 00:23:10.171 [job2] 00:23:10.171 filename=/dev/nvme1n1 00:23:10.171 [job3] 00:23:10.171 filename=/dev/nvme2n1 00:23:10.171 [job4] 00:23:10.171 filename=/dev/nvme3n1 00:23:10.171 [job5] 00:23:10.171 filename=/dev/nvme4n1 00:23:10.171 [job6] 00:23:10.171 filename=/dev/nvme5n1 00:23:10.171 [job7] 00:23:10.171 filename=/dev/nvme6n1 00:23:10.171 [job8] 00:23:10.171 filename=/dev/nvme7n1 00:23:10.171 [job9] 00:23:10.171 filename=/dev/nvme8n1 00:23:10.171 [job10] 00:23:10.171 filename=/dev/nvme9n1 00:23:10.171 Could not set queue depth (nvme0n1) 00:23:10.171 Could not set queue depth (nvme10n1) 00:23:10.171 Could not set queue depth (nvme1n1) 00:23:10.171 Could not set queue depth (nvme2n1) 00:23:10.171 Could not set queue depth (nvme3n1) 00:23:10.171 Could not set queue depth (nvme4n1) 00:23:10.171 Could not set queue depth (nvme5n1) 00:23:10.171 Could not set queue depth (nvme6n1) 00:23:10.171 Could not set queue depth (nvme7n1) 00:23:10.171 Could not set queue depth (nvme8n1) 00:23:10.171 Could not set queue depth (nvme9n1) 00:23:10.430 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.430 fio-3.35 00:23:10.430 Starting 11 threads 00:23:22.668 00:23:22.668 job0: (groupid=0, jobs=1): err= 0: pid=1188364: Mon Apr 15 22:50:05 2024 00:23:22.668 read: IOPS=879, BW=220MiB/s (230MB/s)(2209MiB/10050msec) 00:23:22.668 slat (usec): min=6, max=74280, avg=990.96, stdev=3103.18 00:23:22.668 clat (msec): min=2, max=184, avg=71.72, stdev=29.64 00:23:22.668 lat (msec): min=2, max=184, avg=72.71, stdev=30.08 00:23:22.668 clat percentiles (msec): 00:23:22.668 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 45], 00:23:22.668 | 30.00th=[ 55], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 81], 00:23:22.668 | 70.00th=[ 89], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 118], 00:23:22.668 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 146], 00:23:22.668 | 99.99th=[ 184] 00:23:22.668 bw ( KiB/s): min=135680, max=369664, per=9.98%, avg=224563.20, 
stdev=63003.80, samples=20 00:23:22.668 iops : min= 530, max= 1444, avg=877.20, stdev=246.11, samples=20 00:23:22.668 lat (msec) : 4=0.08%, 10=1.68%, 20=3.36%, 50=21.13%, 100=55.33% 00:23:22.668 lat (msec) : 250=18.43% 00:23:22.668 cpu : usr=0.38%, sys=2.66%, ctx=2047, majf=0, minf=4097 00:23:22.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:22.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.669 issued rwts: total=8835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.669 job1: (groupid=0, jobs=1): err= 0: pid=1188383: Mon Apr 15 22:50:05 2024 00:23:22.669 read: IOPS=848, BW=212MiB/s (222MB/s)(2143MiB/10102msec) 00:23:22.669 slat (usec): min=6, max=113945, avg=1030.55, stdev=3751.53 00:23:22.669 clat (msec): min=2, max=238, avg=74.31, stdev=34.73 00:23:22.669 lat (msec): min=2, max=266, avg=75.34, stdev=35.26 00:23:22.669 clat percentiles (msec): 00:23:22.669 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 24], 20.00th=[ 37], 00:23:22.669 | 30.00th=[ 60], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 86], 00:23:22.669 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 111], 95.00th=[ 136], 00:23:22.669 | 99.00th=[ 161], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 178], 00:23:22.669 | 99.99th=[ 239] 00:23:22.669 bw ( KiB/s): min=131072, max=371712, per=9.68%, avg=217734.20, stdev=70024.89, samples=20 00:23:22.669 iops : min= 512, max= 1452, avg=850.50, stdev=273.54, samples=20 00:23:22.669 lat (msec) : 4=0.12%, 10=1.93%, 20=5.73%, 50=18.19%, 100=56.22% 00:23:22.669 lat (msec) : 250=17.82% 00:23:22.669 cpu : usr=0.29%, sys=2.95%, ctx=1971, majf=0, minf=2971 00:23:22.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.669 issued rwts: total=8570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.669 job2: (groupid=0, jobs=1): err= 0: pid=1188392: Mon Apr 15 22:50:05 2024 00:23:22.669 read: IOPS=792, BW=198MiB/s (208MB/s)(2001MiB/10096msec) 00:23:22.669 slat (usec): min=5, max=81358, avg=1130.05, stdev=3236.43 00:23:22.669 clat (msec): min=2, max=187, avg=79.48, stdev=26.12 00:23:22.669 lat (msec): min=3, max=187, avg=80.61, stdev=26.48 00:23:22.669 clat percentiles (msec): 00:23:22.669 | 1.00th=[ 13], 5.00th=[ 39], 10.00th=[ 49], 20.00th=[ 59], 00:23:22.669 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:23:22.669 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 127], 00:23:22.669 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 182], 99.95th=[ 186], 00:23:22.669 | 99.99th=[ 188] 00:23:22.669 bw ( KiB/s): min=123904, max=287232, per=9.04%, avg=203318.20, stdev=41654.70, samples=20 00:23:22.669 iops : min= 484, max= 1122, avg=794.20, stdev=162.69, samples=20 00:23:22.669 lat (msec) : 4=0.01%, 10=0.47%, 20=1.22%, 50=9.76%, 100=71.97% 00:23:22.669 lat (msec) : 250=16.56% 00:23:22.669 cpu : usr=0.27%, sys=2.35%, ctx=1821, majf=0, minf=4097 00:23:22.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.669 issued 
rwts: total=8005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.669 job3: (groupid=0, jobs=1): err= 0: pid=1188402: Mon Apr 15 22:50:05 2024 00:23:22.669 read: IOPS=1093, BW=273MiB/s (287MB/s)(2739MiB/10016msec) 00:23:22.669 slat (usec): min=5, max=89199, avg=800.34, stdev=2809.95 00:23:22.669 clat (usec): min=1997, max=183495, avg=57624.07, stdev=27708.47 00:23:22.669 lat (msec): min=2, max=203, avg=58.42, stdev=28.13 00:23:22.669 clat percentiles (msec): 00:23:22.669 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 25], 20.00th=[ 33], 00:23:22.669 | 30.00th=[ 40], 40.00th=[ 46], 50.00th=[ 52], 60.00th=[ 66], 00:23:22.669 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 99], 95.00th=[ 109], 00:23:22.669 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 144], 00:23:22.669 | 99.99th=[ 161] 00:23:22.669 bw ( KiB/s): min=164352, max=453632, per=12.40%, avg=278886.40, stdev=88686.15, samples=20 00:23:22.669 iops : min= 642, max= 1772, avg=1089.40, stdev=346.43, samples=20 00:23:22.669 lat (msec) : 2=0.01%, 4=0.29%, 10=1.31%, 20=4.05%, 50=42.41% 00:23:22.669 lat (msec) : 100=43.39%, 250=8.53% 00:23:22.669 cpu : usr=0.35%, sys=3.14%, ctx=2447, majf=0, minf=4097 00:23:22.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.669 issued rwts: total=10957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.669 job4: (groupid=0, jobs=1): err= 0: pid=1188408: Mon Apr 15 22:50:05 2024 00:23:22.669 read: IOPS=740, BW=185MiB/s (194MB/s)(1862MiB/10055msec) 00:23:22.669 slat (usec): min=6, max=36056, avg=1113.00, stdev=3116.30 00:23:22.669 clat (msec): min=5, max=152, avg=85.18, stdev=24.05 00:23:22.669 lat (msec): min=8, max=176, avg=86.29, stdev=24.37 00:23:22.669 clat percentiles (msec): 00:23:22.669 | 1.00th=[ 26], 5.00th=[ 44], 10.00th=[ 51], 20.00th=[ 65], 00:23:22.669 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 95], 00:23:22.669 | 70.00th=[ 101], 80.00th=[ 106], 90.00th=[ 115], 95.00th=[ 123], 00:23:22.669 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 144], 00:23:22.669 | 99.99th=[ 153] 00:23:22.669 bw ( KiB/s): min=138240, max=328192, per=8.40%, avg=189066.25, stdev=48005.38, samples=20 00:23:22.669 iops : min= 540, max= 1282, avg=738.50, stdev=187.55, samples=20 00:23:22.669 lat (msec) : 10=0.07%, 20=0.55%, 50=8.75%, 100=60.73%, 250=29.90% 00:23:22.669 cpu : usr=0.38%, sys=2.19%, ctx=1876, majf=0, minf=4097 00:23:22.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.669 issued rwts: total=7449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.669 job5: (groupid=0, jobs=1): err= 0: pid=1188432: Mon Apr 15 22:50:05 2024 00:23:22.669 read: IOPS=893, BW=223MiB/s (234MB/s)(2254MiB/10089msec) 00:23:22.669 slat (usec): min=5, max=103176, avg=990.16, stdev=3401.97 00:23:22.669 clat (usec): min=1848, max=208984, avg=70532.95, stdev=36120.98 00:23:22.669 lat (usec): min=1895, max=217077, avg=71523.12, stdev=36667.01 00:23:22.669 clat percentiles (msec): 00:23:22.669 | 1.00th=[ 5], 
5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 32], 00:23:22.669 | 30.00th=[ 48], 40.00th=[ 62], 50.00th=[ 75], 60.00th=[ 86], 00:23:22.669 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 113], 95.00th=[ 123], 00:23:22.669 | 99.00th=[ 163], 99.50th=[ 182], 99.90th=[ 188], 99.95th=[ 194], 00:23:22.669 | 99.99th=[ 209] 00:23:22.669 bw ( KiB/s): min=137216, max=389120, per=10.18%, avg=229108.75, stdev=81325.65, samples=20 00:23:22.669 iops : min= 536, max= 1520, avg=894.95, stdev=317.67, samples=20 00:23:22.669 lat (msec) : 2=0.04%, 4=0.57%, 10=3.12%, 20=6.97%, 50=20.59% 00:23:22.669 lat (msec) : 100=44.53%, 250=24.18% 00:23:22.669 cpu : usr=0.36%, sys=2.56%, ctx=2061, majf=0, minf=4097 00:23:22.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.669 issued rwts: total=9014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.669 job6: (groupid=0, jobs=1): err= 0: pid=1188444: Mon Apr 15 22:50:05 2024 00:23:22.669 read: IOPS=668, BW=167MiB/s (175MB/s)(1685MiB/10083msec) 00:23:22.669 slat (usec): min=8, max=39552, avg=1356.55, stdev=3465.91 00:23:22.669 clat (msec): min=10, max=186, avg=94.30, stdev=24.66 00:23:22.669 lat (msec): min=10, max=186, avg=95.66, stdev=25.13 00:23:22.669 clat percentiles (msec): 00:23:22.669 | 1.00th=[ 21], 5.00th=[ 55], 10.00th=[ 66], 20.00th=[ 77], 00:23:22.669 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 101], 00:23:22.669 | 70.00th=[ 106], 80.00th=[ 112], 90.00th=[ 124], 95.00th=[ 136], 00:23:22.669 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 184], 00:23:22.669 | 99.99th=[ 188] 00:23:22.669 bw ( KiB/s): min=120320, max=242176, per=7.60%, avg=170931.20, stdev=32576.82, samples=20 00:23:22.669 iops : min= 470, max= 946, avg=667.70, stdev=127.25, samples=20 00:23:22.669 lat (msec) : 20=0.95%, 50=2.94%, 100=54.53%, 250=41.59% 00:23:22.669 cpu : usr=0.22%, sys=2.20%, ctx=1619, majf=0, minf=4097 00:23:22.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.669 issued rwts: total=6740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.669 job7: (groupid=0, jobs=1): err= 0: pid=1188455: Mon Apr 15 22:50:05 2024 00:23:22.669 read: IOPS=712, BW=178MiB/s (187MB/s)(1798MiB/10089msec) 00:23:22.669 slat (usec): min=8, max=35229, avg=1295.62, stdev=3325.67 00:23:22.669 clat (msec): min=14, max=190, avg=88.43, stdev=22.53 00:23:22.669 lat (msec): min=14, max=190, avg=89.73, stdev=22.88 00:23:22.669 clat percentiles (msec): 00:23:22.669 | 1.00th=[ 39], 5.00th=[ 55], 10.00th=[ 63], 20.00th=[ 71], 00:23:22.669 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 93], 00:23:22.669 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 130], 00:23:22.669 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 165], 00:23:22.669 | 99.99th=[ 190] 00:23:22.669 bw ( KiB/s): min=117248, max=244224, per=8.11%, avg=182451.20, stdev=33764.34, samples=20 00:23:22.669 iops : min= 458, max= 954, avg=712.70, stdev=131.89, samples=20 00:23:22.669 lat (msec) : 20=0.32%, 50=2.81%, 100=70.14%, 250=26.73% 00:23:22.669 cpu : usr=0.21%, 
sys=2.19%, ctx=1629, majf=0, minf=4097 00:23:22.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.669 issued rwts: total=7190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.669 job8: (groupid=0, jobs=1): err= 0: pid=1188489: Mon Apr 15 22:50:05 2024 00:23:22.669 read: IOPS=674, BW=169MiB/s (177MB/s)(1707MiB/10123msec) 00:23:22.669 slat (usec): min=7, max=52378, avg=1273.39, stdev=3594.65 00:23:22.669 clat (msec): min=25, max=221, avg=93.51, stdev=23.97 00:23:22.669 lat (msec): min=25, max=221, avg=94.78, stdev=24.33 00:23:22.669 clat percentiles (msec): 00:23:22.669 | 1.00th=[ 40], 5.00th=[ 59], 10.00th=[ 69], 20.00th=[ 75], 00:23:22.670 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 97], 00:23:22.670 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 125], 95.00th=[ 136], 00:23:22.670 | 99.00th=[ 155], 99.50th=[ 184], 99.90th=[ 222], 99.95th=[ 222], 00:23:22.670 | 99.99th=[ 222] 00:23:22.670 bw ( KiB/s): min=120320, max=224768, per=7.70%, avg=173184.00, stdev=29162.90, samples=20 00:23:22.670 iops : min= 470, max= 878, avg=676.50, stdev=113.92, samples=20 00:23:22.670 lat (msec) : 50=2.67%, 100=61.95%, 250=35.38% 00:23:22.670 cpu : usr=0.25%, sys=2.07%, ctx=1631, majf=0, minf=4097 00:23:22.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:22.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.670 issued rwts: total=6828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.670 job9: (groupid=0, jobs=1): err= 0: pid=1188502: Mon Apr 15 22:50:05 2024 00:23:22.670 read: IOPS=905, BW=226MiB/s (237MB/s)(2284MiB/10092msec) 00:23:22.670 slat (usec): min=5, max=94030, avg=964.72, stdev=3356.84 00:23:22.670 clat (usec): min=1579, max=182394, avg=69654.24, stdev=36047.49 00:23:22.670 lat (usec): min=1627, max=182774, avg=70618.95, stdev=36550.32 00:23:22.670 clat percentiles (msec): 00:23:22.670 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 25], 20.00th=[ 33], 00:23:22.670 | 30.00th=[ 41], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 82], 00:23:22.670 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 115], 95.00th=[ 134], 00:23:22.670 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 165], 99.95th=[ 167], 00:23:22.670 | 99.99th=[ 182] 00:23:22.670 bw ( KiB/s): min=114176, max=439808, per=10.32%, avg=232262.40, stdev=86336.16, samples=20 00:23:22.670 iops : min= 446, max= 1718, avg=907.25, stdev=337.26, samples=20 00:23:22.670 lat (msec) : 2=0.01%, 4=0.28%, 10=2.22%, 20=4.25%, 50=27.31% 00:23:22.670 lat (msec) : 100=46.55%, 250=19.37% 00:23:22.670 cpu : usr=0.46%, sys=2.68%, ctx=2088, majf=0, minf=4097 00:23:22.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:22.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.670 issued rwts: total=9136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.670 job10: (groupid=0, jobs=1): err= 0: pid=1188512: Mon Apr 15 22:50:05 2024 00:23:22.670 read: IOPS=617, BW=154MiB/s 
(162MB/s)(1558MiB/10089msec) 00:23:22.670 slat (usec): min=8, max=57071, avg=1378.87, stdev=3831.87 00:23:22.670 clat (msec): min=3, max=206, avg=102.10, stdev=23.49 00:23:22.670 lat (msec): min=3, max=206, avg=103.48, stdev=23.89 00:23:22.670 clat percentiles (msec): 00:23:22.670 | 1.00th=[ 23], 5.00th=[ 70], 10.00th=[ 78], 20.00th=[ 86], 00:23:22.670 | 30.00th=[ 92], 40.00th=[ 100], 50.00th=[ 104], 60.00th=[ 108], 00:23:22.670 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 129], 95.00th=[ 138], 00:23:22.670 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 197], 99.95th=[ 203], 00:23:22.670 | 99.99th=[ 207] 00:23:22.670 bw ( KiB/s): min=110080, max=199168, per=7.02%, avg=157926.40, stdev=21945.68, samples=20 00:23:22.670 iops : min= 430, max= 778, avg=616.90, stdev=85.73, samples=20 00:23:22.670 lat (msec) : 4=0.02%, 10=0.18%, 20=0.59%, 50=2.23%, 100=38.86% 00:23:22.670 lat (msec) : 250=58.12% 00:23:22.670 cpu : usr=0.19%, sys=1.97%, ctx=1530, majf=0, minf=4097 00:23:22.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:22.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:22.670 issued rwts: total=6232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.670 00:23:22.670 Run status group 0 (all jobs): 00:23:22.670 READ: bw=2197MiB/s (2304MB/s), 154MiB/s-273MiB/s (162MB/s-287MB/s), io=21.7GiB (23.3GB), run=10016-10123msec 00:23:22.670 00:23:22.670 Disk stats (read/write): 00:23:22.670 nvme0n1: ios=17350/0, merge=0/0, ticks=1220236/0, in_queue=1220236, util=96.40% 00:23:22.670 nvme10n1: ios=16839/0, merge=0/0, ticks=1218602/0, in_queue=1218602, util=96.73% 00:23:22.670 nvme1n1: ios=15748/0, merge=0/0, ticks=1221118/0, in_queue=1221118, util=97.06% 00:23:22.670 nvme2n1: ios=21270/0, merge=0/0, ticks=1224732/0, in_queue=1224732, util=97.32% 00:23:22.670 nvme3n1: ios=14543/0, merge=0/0, ticks=1220461/0, in_queue=1220461, util=97.43% 00:23:22.670 nvme4n1: ios=17697/0, merge=0/0, ticks=1218105/0, in_queue=1218105, util=97.89% 00:23:22.670 nvme5n1: ios=13195/0, merge=0/0, ticks=1213723/0, in_queue=1213723, util=98.02% 00:23:22.670 nvme6n1: ios=14069/0, merge=0/0, ticks=1216579/0, in_queue=1216579, util=98.18% 00:23:22.670 nvme7n1: ios=13655/0, merge=0/0, ticks=1248558/0, in_queue=1248558, util=98.82% 00:23:22.670 nvme8n1: ios=17952/0, merge=0/0, ticks=1218265/0, in_queue=1218265, util=98.98% 00:23:22.670 nvme9n1: ios=12176/0, merge=0/0, ticks=1216710/0, in_queue=1216710, util=99.23% 00:23:22.670 22:50:05 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:22.670 [global] 00:23:22.670 thread=1 00:23:22.670 invalidate=1 00:23:22.670 rw=randwrite 00:23:22.670 time_based=1 00:23:22.670 runtime=10 00:23:22.670 ioengine=libaio 00:23:22.670 direct=1 00:23:22.670 bs=262144 00:23:22.670 iodepth=64 00:23:22.670 norandommap=1 00:23:22.670 numjobs=1 00:23:22.670 00:23:22.670 [job0] 00:23:22.670 filename=/dev/nvme0n1 00:23:22.670 [job1] 00:23:22.670 filename=/dev/nvme10n1 00:23:22.670 [job2] 00:23:22.670 filename=/dev/nvme1n1 00:23:22.670 [job3] 00:23:22.670 filename=/dev/nvme2n1 00:23:22.670 [job4] 00:23:22.670 filename=/dev/nvme3n1 00:23:22.670 [job5] 00:23:22.670 filename=/dev/nvme4n1 00:23:22.670 [job6] 00:23:22.670 filename=/dev/nvme5n1 00:23:22.670 [job7] 00:23:22.670 filename=/dev/nvme6n1 
00:23:22.670 [job8] 00:23:22.670 filename=/dev/nvme7n1 00:23:22.670 [job9] 00:23:22.670 filename=/dev/nvme8n1 00:23:22.670 [job10] 00:23:22.670 filename=/dev/nvme9n1 00:23:22.670 Could not set queue depth (nvme0n1) 00:23:22.670 Could not set queue depth (nvme10n1) 00:23:22.670 Could not set queue depth (nvme1n1) 00:23:22.670 Could not set queue depth (nvme2n1) 00:23:22.670 Could not set queue depth (nvme3n1) 00:23:22.670 Could not set queue depth (nvme4n1) 00:23:22.670 Could not set queue depth (nvme5n1) 00:23:22.670 Could not set queue depth (nvme6n1) 00:23:22.670 Could not set queue depth (nvme7n1) 00:23:22.670 Could not set queue depth (nvme8n1) 00:23:22.670 Could not set queue depth (nvme9n1) 00:23:22.670 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:22.670 fio-3.35 00:23:22.670 Starting 11 threads 00:23:32.678 00:23:32.678 job0: (groupid=0, jobs=1): err= 0: pid=1190311: Mon Apr 15 22:50:16 2024 00:23:32.678 write: IOPS=742, BW=186MiB/s (195MB/s)(1876MiB/10098msec); 0 zone resets 00:23:32.678 slat (usec): min=19, max=72898, avg=1249.65, stdev=2460.21 00:23:32.678 clat (msec): min=5, max=202, avg=84.83, stdev=20.48 00:23:32.678 lat (msec): min=5, max=202, avg=86.08, stdev=20.71 00:23:32.678 clat percentiles (msec): 00:23:32.678 | 1.00th=[ 19], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 68], 00:23:32.678 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 89], 00:23:32.678 | 70.00th=[ 92], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 109], 00:23:32.678 | 99.00th=[ 128], 99.50th=[ 150], 99.90th=[ 190], 99.95th=[ 197], 00:23:32.678 | 99.99th=[ 203] 00:23:32.678 bw ( KiB/s): min=150016, max=252928, per=11.84%, avg=190454.70, stdev=33822.74, samples=20 00:23:32.678 iops : min= 586, max= 988, avg=743.95, stdev=132.13, samples=20 00:23:32.678 lat (msec) : 10=0.35%, 20=0.84%, 50=2.84%, 100=71.34%, 250=24.63% 00:23:32.678 cpu : usr=1.58%, sys=2.14%, ctx=2370, majf=0, minf=1 00:23:32.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:32.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.678 issued rwts: 
total=0,7502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.678 job1: (groupid=0, jobs=1): err= 0: pid=1190323: Mon Apr 15 22:50:16 2024 00:23:32.678 write: IOPS=491, BW=123MiB/s (129MB/s)(1241MiB/10096msec); 0 zone resets 00:23:32.678 slat (usec): min=20, max=40528, avg=1872.93, stdev=3748.45 00:23:32.678 clat (msec): min=3, max=210, avg=128.28, stdev=41.80 00:23:32.678 lat (msec): min=3, max=210, avg=130.16, stdev=42.42 00:23:32.678 clat percentiles (msec): 00:23:32.678 | 1.00th=[ 19], 5.00th=[ 51], 10.00th=[ 88], 20.00th=[ 102], 00:23:32.678 | 30.00th=[ 107], 40.00th=[ 108], 50.00th=[ 115], 60.00th=[ 138], 00:23:32.678 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 182], 95.00th=[ 203], 00:23:32.678 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 211], 99.95th=[ 211], 00:23:32.678 | 99.99th=[ 211] 00:23:32.678 bw ( KiB/s): min=81920, max=199168, per=7.80%, avg=125440.00, stdev=35070.28, samples=20 00:23:32.678 iops : min= 320, max= 778, avg=490.00, stdev=136.99, samples=20 00:23:32.678 lat (msec) : 4=0.02%, 10=0.18%, 20=0.91%, 50=3.83%, 100=10.88% 00:23:32.678 lat (msec) : 250=84.18% 00:23:32.678 cpu : usr=1.46%, sys=1.41%, ctx=1711, majf=0, minf=1 00:23:32.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:23:32.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.678 issued rwts: total=0,4963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.678 job2: (groupid=0, jobs=1): err= 0: pid=1190326: Mon Apr 15 22:50:16 2024 00:23:32.678 write: IOPS=600, BW=150MiB/s (157MB/s)(1516MiB/10100msec); 0 zone resets 00:23:32.678 slat (usec): min=24, max=23078, avg=1611.62, stdev=2803.93 00:23:32.678 clat (msec): min=10, max=203, avg=104.93, stdev=10.16 00:23:32.678 lat (msec): min=10, max=203, avg=106.54, stdev= 9.98 00:23:32.678 clat percentiles (msec): 00:23:32.678 | 1.00th=[ 75], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 101], 00:23:32.678 | 30.00th=[ 105], 40.00th=[ 106], 50.00th=[ 106], 60.00th=[ 107], 00:23:32.678 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 116], 00:23:32.678 | 99.00th=[ 130], 99.50th=[ 150], 99.90th=[ 192], 99.95th=[ 199], 00:23:32.678 | 99.99th=[ 205] 00:23:32.678 bw ( KiB/s): min=145408, max=165376, per=9.55%, avg=153651.20, stdev=4344.15, samples=20 00:23:32.678 iops : min= 568, max= 646, avg=600.20, stdev=16.97, samples=20 00:23:32.678 lat (msec) : 20=0.16%, 50=0.35%, 100=16.67%, 250=82.82% 00:23:32.678 cpu : usr=1.26%, sys=1.85%, ctx=1668, majf=0, minf=1 00:23:32.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:32.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.678 issued rwts: total=0,6065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.678 job3: (groupid=0, jobs=1): err= 0: pid=1190327: Mon Apr 15 22:50:16 2024 00:23:32.678 write: IOPS=465, BW=116MiB/s (122MB/s)(1179MiB/10124msec); 0 zone resets 00:23:32.678 slat (usec): min=24, max=234991, avg=2057.30, stdev=5810.63 00:23:32.678 clat (msec): min=3, max=447, avg=135.23, stdev=43.35 00:23:32.678 lat (msec): min=4, max=447, avg=137.29, stdev=43.78 00:23:32.678 clat percentiles (msec): 00:23:32.678 | 1.00th=[ 32], 
5.00th=[ 73], 10.00th=[ 84], 20.00th=[ 93], 00:23:32.678 | 30.00th=[ 124], 40.00th=[ 136], 50.00th=[ 140], 60.00th=[ 150], 00:23:32.678 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:23:32.678 | 99.00th=[ 239], 99.50th=[ 355], 99.90th=[ 439], 99.95th=[ 447], 00:23:32.678 | 99.99th=[ 447] 00:23:32.678 bw ( KiB/s): min=83968, max=189952, per=7.41%, avg=119142.40, stdev=30694.88, samples=20 00:23:32.678 iops : min= 328, max= 742, avg=465.40, stdev=119.90, samples=20 00:23:32.678 lat (msec) : 4=0.02%, 10=0.17%, 20=0.30%, 50=2.23%, 100=20.56% 00:23:32.678 lat (msec) : 250=75.81%, 500=0.91% 00:23:32.678 cpu : usr=1.09%, sys=1.22%, ctx=1345, majf=0, minf=1 00:23:32.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:32.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.678 issued rwts: total=0,4717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.678 job4: (groupid=0, jobs=1): err= 0: pid=1190329: Mon Apr 15 22:50:16 2024 00:23:32.678 write: IOPS=776, BW=194MiB/s (204MB/s)(1961MiB/10096msec); 0 zone resets 00:23:32.678 slat (usec): min=19, max=13310, avg=1207.06, stdev=2228.70 00:23:32.678 clat (msec): min=3, max=203, avg=81.16, stdev=20.31 00:23:32.678 lat (msec): min=3, max=203, avg=82.37, stdev=20.58 00:23:32.678 clat percentiles (msec): 00:23:32.678 | 1.00th=[ 21], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 67], 00:23:32.678 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 89], 00:23:32.678 | 70.00th=[ 90], 80.00th=[ 101], 90.00th=[ 107], 95.00th=[ 108], 00:23:32.678 | 99.00th=[ 110], 99.50th=[ 136], 99.90th=[ 190], 99.95th=[ 197], 00:23:32.678 | 99.99th=[ 203] 00:23:32.678 bw ( KiB/s): min=150016, max=297472, per=12.38%, avg=199160.05, stdev=41828.07, samples=20 00:23:32.678 iops : min= 586, max= 1162, avg=777.95, stdev=163.40, samples=20 00:23:32.678 lat (msec) : 4=0.01%, 10=0.23%, 20=0.74%, 50=4.08%, 100=75.39% 00:23:32.678 lat (msec) : 250=19.55% 00:23:32.678 cpu : usr=1.82%, sys=1.92%, ctx=2498, majf=0, minf=1 00:23:32.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:32.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.678 issued rwts: total=0,7842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.678 job5: (groupid=0, jobs=1): err= 0: pid=1190330: Mon Apr 15 22:50:16 2024 00:23:32.678 write: IOPS=612, BW=153MiB/s (160MB/s)(1545MiB/10097msec); 0 zone resets 00:23:32.678 slat (usec): min=20, max=89561, avg=1560.05, stdev=3082.92 00:23:32.678 clat (msec): min=16, max=206, avg=102.59, stdev=17.66 00:23:32.678 lat (msec): min=18, max=206, avg=104.15, stdev=17.65 00:23:32.678 clat percentiles (msec): 00:23:32.678 | 1.00th=[ 41], 5.00th=[ 66], 10.00th=[ 97], 20.00th=[ 101], 00:23:32.678 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 106], 60.00th=[ 107], 00:23:32.678 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 118], 00:23:32.678 | 99.00th=[ 169], 99.50th=[ 190], 99.90th=[ 205], 99.95th=[ 205], 00:23:32.678 | 99.99th=[ 207] 00:23:32.678 bw ( KiB/s): min=128000, max=230861, per=9.74%, avg=156618.25, stdev=18948.33, samples=20 00:23:32.678 iops : min= 500, max= 901, avg=611.75, stdev=73.85, samples=20 00:23:32.679 lat (msec) : 
20=0.10%, 50=1.83%, 100=20.45%, 250=77.62% 00:23:32.679 cpu : usr=1.33%, sys=1.92%, ctx=1815, majf=0, minf=1 00:23:32.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:32.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.679 issued rwts: total=0,6180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.679 job6: (groupid=0, jobs=1): err= 0: pid=1190331: Mon Apr 15 22:50:16 2024 00:23:32.679 write: IOPS=656, BW=164MiB/s (172MB/s)(1657MiB/10098msec); 0 zone resets 00:23:32.679 slat (usec): min=21, max=28500, avg=1390.08, stdev=2757.46 00:23:32.679 clat (msec): min=3, max=204, avg=96.10, stdev=34.85 00:23:32.679 lat (msec): min=3, max=204, avg=97.49, stdev=35.31 00:23:32.679 clat percentiles (msec): 00:23:32.679 | 1.00th=[ 13], 5.00th=[ 40], 10.00th=[ 72], 20.00th=[ 83], 00:23:32.679 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 96], 00:23:32.679 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 127], 95.00th=[ 186], 00:23:32.679 | 99.00th=[ 199], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 205], 00:23:32.679 | 99.99th=[ 205] 00:23:32.679 bw ( KiB/s): min=81920, max=279552, per=10.45%, avg=168057.20, stdev=41198.25, samples=20 00:23:32.679 iops : min= 320, max= 1092, avg=656.45, stdev=160.92, samples=20 00:23:32.679 lat (msec) : 4=0.02%, 10=0.50%, 20=1.74%, 50=4.36%, 100=57.91% 00:23:32.679 lat (msec) : 250=35.48% 00:23:32.679 cpu : usr=1.64%, sys=2.15%, ctx=2258, majf=0, minf=1 00:23:32.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:32.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.679 issued rwts: total=0,6627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.679 job7: (groupid=0, jobs=1): err= 0: pid=1190332: Mon Apr 15 22:50:16 2024 00:23:32.679 write: IOPS=621, BW=155MiB/s (163MB/s)(1568MiB/10099msec); 0 zone resets 00:23:32.679 slat (usec): min=20, max=10615, avg=1513.70, stdev=2691.70 00:23:32.679 clat (msec): min=12, max=201, avg=101.50, stdev=13.65 00:23:32.679 lat (msec): min=12, max=201, avg=103.02, stdev=13.65 00:23:32.679 clat percentiles (msec): 00:23:32.679 | 1.00th=[ 56], 5.00th=[ 71], 10.00th=[ 83], 20.00th=[ 100], 00:23:32.679 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 106], 60.00th=[ 107], 00:23:32.679 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 111], 00:23:32.679 | 99.00th=[ 122], 99.50th=[ 148], 99.90th=[ 188], 99.95th=[ 197], 00:23:32.679 | 99.99th=[ 203] 00:23:32.679 bw ( KiB/s): min=147456, max=215470, per=9.88%, avg=158971.90, stdev=15850.53, samples=20 00:23:32.679 iops : min= 576, max= 841, avg=620.95, stdev=61.79, samples=20 00:23:32.679 lat (msec) : 20=0.13%, 50=0.75%, 100=24.38%, 250=74.74% 00:23:32.679 cpu : usr=1.34%, sys=1.75%, ctx=1878, majf=0, minf=1 00:23:32.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:32.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.679 issued rwts: total=0,6272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.679 job8: (groupid=0, jobs=1): err= 0: 
pid=1190335: Mon Apr 15 22:50:16 2024 00:23:32.679 write: IOPS=445, BW=111MiB/s (117MB/s)(1127MiB/10121msec); 0 zone resets 00:23:32.679 slat (usec): min=24, max=52502, avg=2143.62, stdev=4312.79 00:23:32.679 clat (msec): min=3, max=254, avg=141.39, stdev=42.90 00:23:32.679 lat (msec): min=4, max=254, avg=143.53, stdev=43.46 00:23:32.679 clat percentiles (msec): 00:23:32.679 | 1.00th=[ 19], 5.00th=[ 62], 10.00th=[ 79], 20.00th=[ 118], 00:23:32.679 | 30.00th=[ 133], 40.00th=[ 138], 50.00th=[ 142], 60.00th=[ 155], 00:23:32.679 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 211], 00:23:32.679 | 99.00th=[ 218], 99.50th=[ 222], 99.90th=[ 247], 99.95th=[ 247], 00:23:32.679 | 99.99th=[ 255] 00:23:32.679 bw ( KiB/s): min=79872, max=212480, per=7.08%, avg=113817.60, stdev=32924.72, samples=20 00:23:32.679 iops : min= 312, max= 830, avg=444.60, stdev=128.61, samples=20 00:23:32.679 lat (msec) : 4=0.02%, 10=0.38%, 20=0.84%, 50=2.73%, 100=12.09% 00:23:32.679 lat (msec) : 250=83.90%, 500=0.04% 00:23:32.679 cpu : usr=0.89%, sys=1.39%, ctx=1403, majf=0, minf=1 00:23:32.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:32.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.679 issued rwts: total=0,4509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.679 job9: (groupid=0, jobs=1): err= 0: pid=1190336: Mon Apr 15 22:50:16 2024 00:23:32.679 write: IOPS=439, BW=110MiB/s (115MB/s)(1111MiB/10124msec); 0 zone resets 00:23:32.679 slat (usec): min=22, max=58355, avg=2192.70, stdev=4299.96 00:23:32.679 clat (msec): min=3, max=253, avg=143.46, stdev=34.55 00:23:32.679 lat (msec): min=4, max=253, avg=145.65, stdev=34.91 00:23:32.679 clat percentiles (msec): 00:23:32.679 | 1.00th=[ 15], 5.00th=[ 87], 10.00th=[ 107], 20.00th=[ 127], 00:23:32.679 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 144], 60.00th=[ 155], 00:23:32.679 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 194], 00:23:32.679 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 245], 99.95th=[ 245], 00:23:32.679 | 99.99th=[ 253] 00:23:32.679 bw ( KiB/s): min=83968, max=154112, per=6.97%, avg=112179.20, stdev=20981.74, samples=20 00:23:32.679 iops : min= 328, max= 602, avg=438.20, stdev=81.96, samples=20 00:23:32.679 lat (msec) : 4=0.02%, 10=0.49%, 20=0.97%, 50=1.15%, 100=4.68% 00:23:32.679 lat (msec) : 250=92.64%, 500=0.04% 00:23:32.679 cpu : usr=1.06%, sys=1.12%, ctx=1305, majf=0, minf=1 00:23:32.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:32.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.679 issued rwts: total=0,4445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.679 job10: (groupid=0, jobs=1): err= 0: pid=1190337: Mon Apr 15 22:50:16 2024 00:23:32.679 write: IOPS=443, BW=111MiB/s (116MB/s)(1121MiB/10122msec); 0 zone resets 00:23:32.679 slat (usec): min=26, max=23212, avg=2201.82, stdev=3992.24 00:23:32.679 clat (msec): min=13, max=255, avg=142.18, stdev=32.23 00:23:32.679 lat (msec): min=13, max=255, avg=144.39, stdev=32.53 00:23:32.679 clat percentiles (msec): 00:23:32.679 | 1.00th=[ 57], 5.00th=[ 83], 10.00th=[ 96], 20.00th=[ 125], 00:23:32.679 | 30.00th=[ 133], 40.00th=[ 138], 50.00th=[ 
140], 60.00th=[ 153], 00:23:32.679 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 192], 00:23:32.679 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 245], 99.95th=[ 245], 00:23:32.679 | 99.99th=[ 255] 00:23:32.679 bw ( KiB/s): min=86016, max=176640, per=7.04%, avg=113203.20, stdev=23849.99, samples=20 00:23:32.679 iops : min= 336, max= 690, avg=442.20, stdev=93.16, samples=20 00:23:32.679 lat (msec) : 20=0.16%, 50=0.74%, 100=10.79%, 250=88.27%, 500=0.04% 00:23:32.679 cpu : usr=1.07%, sys=1.38%, ctx=1243, majf=0, minf=1 00:23:32.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:32.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:32.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:32.679 issued rwts: total=0,4485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:32.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:32.679 00:23:32.679 Run status group 0 (all jobs): 00:23:32.679 WRITE: bw=1571MiB/s (1647MB/s), 110MiB/s-194MiB/s (115MB/s-204MB/s), io=15.5GiB (16.7GB), run=10096-10124msec 00:23:32.679 00:23:32.679 Disk stats (read/write): 00:23:32.679 nvme0n1: ios=46/14991, merge=0/0, ticks=1247/1229366, in_queue=1230613, util=99.95% 00:23:32.679 nvme10n1: ios=49/9914, merge=0/0, ticks=115/1230497, in_queue=1230612, util=97.27% 00:23:32.679 nvme1n1: ios=15/12120, merge=0/0, ticks=110/1229375, in_queue=1229485, util=97.47% 00:23:32.679 nvme2n1: ios=49/9392, merge=0/0, ticks=2730/1167537, in_queue=1170267, util=99.90% 00:23:32.679 nvme3n1: ios=0/15674, merge=0/0, ticks=0/1230551, in_queue=1230551, util=97.35% 00:23:32.679 nvme4n1: ios=44/12348, merge=0/0, ticks=2133/1211909, in_queue=1214042, util=100.00% 00:23:32.679 nvme5n1: ios=0/13241, merge=0/0, ticks=0/1230551, in_queue=1230551, util=97.99% 00:23:32.679 nvme6n1: ios=0/12527, merge=0/0, ticks=0/1230541, in_queue=1230541, util=98.16% 00:23:32.679 nvme7n1: ios=44/8982, merge=0/0, ticks=2240/1223650, in_queue=1225890, util=99.90% 00:23:32.679 nvme8n1: ios=42/8846, merge=0/0, ticks=2442/1222621, in_queue=1225063, util=99.92% 00:23:32.679 nvme9n1: ios=0/8933, merge=0/0, ticks=0/1225706, in_queue=1225706, util=99.11% 00:23:32.679 22:50:16 -- target/multiconnection.sh@36 -- # sync 00:23:32.679 22:50:16 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:32.679 22:50:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.679 22:50:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:32.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:32.679 22:50:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:32.679 22:50:17 -- common/autotest_common.sh@1198 -- # local i=0 00:23:32.679 22:50:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:32.679 22:50:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:23:32.679 22:50:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:32.679 22:50:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:32.679 22:50:17 -- common/autotest_common.sh@1210 -- # return 0 00:23:32.679 22:50:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.679 22:50:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.679 22:50:17 -- common/autotest_common.sh@10 -- # set +x 00:23:32.679 22:50:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.680 22:50:17 -- target/multiconnection.sh@37 -- 
# for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.680 22:50:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:32.940 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:32.940 22:50:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:32.940 22:50:17 -- common/autotest_common.sh@1198 -- # local i=0 00:23:32.940 22:50:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:32.940 22:50:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:23:32.940 22:50:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:32.940 22:50:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:32.940 22:50:17 -- common/autotest_common.sh@1210 -- # return 0 00:23:32.940 22:50:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:32.940 22:50:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.940 22:50:17 -- common/autotest_common.sh@10 -- # set +x 00:23:32.940 22:50:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.940 22:50:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.940 22:50:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:33.201 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:33.201 22:50:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:33.201 22:50:17 -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.201 22:50:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:33.201 22:50:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:23:33.201 22:50:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:33.201 22:50:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:33.201 22:50:17 -- common/autotest_common.sh@1210 -- # return 0 00:23:33.201 22:50:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:33.201 22:50:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.201 22:50:17 -- common/autotest_common.sh@10 -- # set +x 00:23:33.201 22:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.201 22:50:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.201 22:50:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:33.463 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:33.463 22:50:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:33.463 22:50:18 -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.463 22:50:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:33.463 22:50:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:23:33.463 22:50:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:33.463 22:50:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:33.463 22:50:18 -- common/autotest_common.sh@1210 -- # return 0 00:23:33.463 22:50:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:33.463 22:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.463 22:50:18 -- common/autotest_common.sh@10 -- # set +x 00:23:33.463 22:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.463 22:50:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.463 22:50:18 -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:33.724 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:33.724 22:50:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:33.724 22:50:18 -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.724 22:50:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:33.724 22:50:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:23:33.724 22:50:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:33.724 22:50:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:33.724 22:50:18 -- common/autotest_common.sh@1210 -- # return 0 00:23:33.724 22:50:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:33.724 22:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.724 22:50:18 -- common/autotest_common.sh@10 -- # set +x 00:23:33.724 22:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.724 22:50:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.724 22:50:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:33.985 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:33.985 22:50:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:33.985 22:50:18 -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.985 22:50:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:33.985 22:50:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:23:33.985 22:50:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:33.985 22:50:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:33.985 22:50:18 -- common/autotest_common.sh@1210 -- # return 0 00:23:33.985 22:50:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:33.985 22:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.985 22:50:18 -- common/autotest_common.sh@10 -- # set +x 00:23:33.985 22:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.985 22:50:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.985 22:50:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:34.245 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:34.245 22:50:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:34.245 22:50:18 -- common/autotest_common.sh@1198 -- # local i=0 00:23:34.245 22:50:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:34.245 22:50:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:34.245 22:50:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:34.245 22:50:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:34.245 22:50:18 -- common/autotest_common.sh@1210 -- # return 0 00:23:34.245 22:50:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:34.245 22:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.245 22:50:18 -- common/autotest_common.sh@10 -- # set +x 00:23:34.245 22:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.245 22:50:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.245 22:50:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:34.507 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 
1 controller(s) 00:23:34.507 22:50:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:34.507 22:50:19 -- common/autotest_common.sh@1198 -- # local i=0 00:23:34.507 22:50:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:34.507 22:50:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:34.507 22:50:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:34.507 22:50:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:34.507 22:50:19 -- common/autotest_common.sh@1210 -- # return 0 00:23:34.507 22:50:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:34.507 22:50:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.507 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:23:34.507 22:50:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.507 22:50:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.507 22:50:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:34.768 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:34.768 22:50:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:34.768 22:50:19 -- common/autotest_common.sh@1198 -- # local i=0 00:23:34.768 22:50:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:34.768 22:50:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:34.768 22:50:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:34.768 22:50:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:34.768 22:50:19 -- common/autotest_common.sh@1210 -- # return 0 00:23:34.768 22:50:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:34.768 22:50:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.768 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:23:34.768 22:50:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.768 22:50:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.768 22:50:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:34.768 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:35.029 22:50:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:35.029 22:50:19 -- common/autotest_common.sh@1198 -- # local i=0 00:23:35.029 22:50:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:35.029 22:50:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:35.029 22:50:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:35.029 22:50:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:35.029 22:50:19 -- common/autotest_common.sh@1210 -- # return 0 00:23:35.029 22:50:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:35.029 22:50:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:35.029 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:23:35.029 22:50:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:35.029 22:50:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:35.029 22:50:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:35.029 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:35.029 22:50:19 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK11 00:23:35.029 22:50:19 -- common/autotest_common.sh@1198 -- # local i=0 00:23:35.029 22:50:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:35.029 22:50:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:35.029 22:50:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:35.029 22:50:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:35.029 22:50:19 -- common/autotest_common.sh@1210 -- # return 0 00:23:35.029 22:50:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:35.029 22:50:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:35.029 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:23:35.029 22:50:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:35.029 22:50:19 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:35.029 22:50:19 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:35.029 22:50:19 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:35.029 22:50:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:35.029 22:50:19 -- nvmf/common.sh@116 -- # sync 00:23:35.029 22:50:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:35.029 22:50:19 -- nvmf/common.sh@119 -- # set +e 00:23:35.029 22:50:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:35.029 22:50:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:35.029 rmmod nvme_tcp 00:23:35.029 rmmod nvme_fabrics 00:23:35.029 rmmod nvme_keyring 00:23:35.029 22:50:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:35.290 22:50:19 -- nvmf/common.sh@123 -- # set -e 00:23:35.290 22:50:19 -- nvmf/common.sh@124 -- # return 0 00:23:35.290 22:50:19 -- nvmf/common.sh@477 -- # '[' -n 1179682 ']' 00:23:35.290 22:50:19 -- nvmf/common.sh@478 -- # killprocess 1179682 00:23:35.290 22:50:19 -- common/autotest_common.sh@926 -- # '[' -z 1179682 ']' 00:23:35.290 22:50:19 -- common/autotest_common.sh@930 -- # kill -0 1179682 00:23:35.290 22:50:19 -- common/autotest_common.sh@931 -- # uname 00:23:35.290 22:50:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:35.290 22:50:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1179682 00:23:35.290 22:50:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:35.290 22:50:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:35.290 22:50:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1179682' 00:23:35.290 killing process with pid 1179682 00:23:35.290 22:50:19 -- common/autotest_common.sh@945 -- # kill 1179682 00:23:35.290 22:50:19 -- common/autotest_common.sh@950 -- # wait 1179682 00:23:35.551 22:50:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:35.551 22:50:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:35.551 22:50:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:35.551 22:50:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.551 22:50:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:35.551 22:50:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.551 22:50:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.551 22:50:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.465 22:50:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:37.465 00:23:37.465 real 1m17.760s 00:23:37.465 user 4m54.575s 00:23:37.465 sys 0m21.476s 00:23:37.465 22:50:22 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:23:37.465 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:23:37.465 ************************************ 00:23:37.465 END TEST nvmf_multiconnection 00:23:37.465 ************************************ 00:23:37.727 22:50:22 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:37.727 22:50:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:37.727 22:50:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:37.727 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:23:37.727 ************************************ 00:23:37.727 START TEST nvmf_initiator_timeout 00:23:37.727 ************************************ 00:23:37.727 22:50:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:37.727 * Looking for test storage... 00:23:37.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:37.727 22:50:22 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.727 22:50:22 -- nvmf/common.sh@7 -- # uname -s 00:23:37.727 22:50:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.727 22:50:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.727 22:50:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.727 22:50:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.727 22:50:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.727 22:50:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.727 22:50:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.727 22:50:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.727 22:50:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.727 22:50:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.727 22:50:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.727 22:50:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.727 22:50:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.727 22:50:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.727 22:50:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.727 22:50:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.727 22:50:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.727 22:50:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.727 22:50:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.727 22:50:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.727 22:50:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.727 22:50:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.727 22:50:22 -- paths/export.sh@5 -- # export PATH 00:23:37.727 22:50:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.727 22:50:22 -- nvmf/common.sh@46 -- # : 0 00:23:37.727 22:50:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:37.727 22:50:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:37.727 22:50:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:37.727 22:50:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.727 22:50:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.727 22:50:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:37.727 22:50:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:37.728 22:50:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:37.728 22:50:22 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:37.728 22:50:22 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:37.728 22:50:22 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:37.728 22:50:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:37.728 22:50:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.728 22:50:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:37.728 22:50:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:37.728 22:50:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:37.728 22:50:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.728 22:50:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.728 22:50:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.728 22:50:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:37.728 22:50:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:37.728 22:50:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:37.728 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:23:45.944 22:50:30 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:23:45.944 22:50:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:45.944 22:50:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:45.944 22:50:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:45.944 22:50:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:45.944 22:50:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:45.944 22:50:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:45.944 22:50:30 -- nvmf/common.sh@294 -- # net_devs=() 00:23:45.944 22:50:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:45.944 22:50:30 -- nvmf/common.sh@295 -- # e810=() 00:23:45.944 22:50:30 -- nvmf/common.sh@295 -- # local -ga e810 00:23:45.944 22:50:30 -- nvmf/common.sh@296 -- # x722=() 00:23:45.944 22:50:30 -- nvmf/common.sh@296 -- # local -ga x722 00:23:45.944 22:50:30 -- nvmf/common.sh@297 -- # mlx=() 00:23:45.944 22:50:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:45.944 22:50:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.944 22:50:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:45.944 22:50:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:45.944 22:50:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:45.944 22:50:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:45.945 22:50:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:45.945 22:50:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:45.945 22:50:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:45.945 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:45.945 22:50:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:45.945 22:50:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:45.945 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:45.945 22:50:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:45.945 22:50:30 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:45.945 22:50:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:45.945 22:50:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.945 22:50:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:45.945 22:50:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.945 22:50:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:45.945 Found net devices under 0000:31:00.0: cvl_0_0 00:23:45.945 22:50:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.945 22:50:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:45.945 22:50:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.945 22:50:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:45.945 22:50:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.945 22:50:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:45.945 Found net devices under 0000:31:00.1: cvl_0_1 00:23:45.945 22:50:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.945 22:50:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:45.945 22:50:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:45.945 22:50:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:45.945 22:50:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.945 22:50:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.945 22:50:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.945 22:50:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:45.945 22:50:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.945 22:50:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.945 22:50:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:45.945 22:50:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.945 22:50:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.945 22:50:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:45.945 22:50:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:45.945 22:50:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.945 22:50:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.945 22:50:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.945 22:50:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.945 22:50:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:45.945 22:50:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.945 22:50:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.945 22:50:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.945 22:50:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:45.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:45.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:23:45.945 00:23:45.945 --- 10.0.0.2 ping statistics --- 00:23:45.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.945 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:23:45.945 22:50:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:23:45.945 00:23:45.945 --- 10.0.0.1 ping statistics --- 00:23:45.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.945 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:23:45.945 22:50:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.945 22:50:30 -- nvmf/common.sh@410 -- # return 0 00:23:45.945 22:50:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:45.945 22:50:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.945 22:50:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:45.945 22:50:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.945 22:50:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:45.945 22:50:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:45.945 22:50:30 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:45.945 22:50:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:45.945 22:50:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:45.945 22:50:30 -- common/autotest_common.sh@10 -- # set +x 00:23:45.945 22:50:30 -- nvmf/common.sh@469 -- # nvmfpid=1197003 00:23:45.945 22:50:30 -- nvmf/common.sh@470 -- # waitforlisten 1197003 00:23:45.945 22:50:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:45.945 22:50:30 -- common/autotest_common.sh@819 -- # '[' -z 1197003 ']' 00:23:45.945 22:50:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.945 22:50:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:45.945 22:50:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.945 22:50:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:45.945 22:50:30 -- common/autotest_common.sh@10 -- # set +x 00:23:45.945 [2024-04-15 22:50:30.576716] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:23:45.945 [2024-04-15 22:50:30.576773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.945 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.945 [2024-04-15 22:50:30.654174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:45.945 [2024-04-15 22:50:30.718519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:45.945 [2024-04-15 22:50:30.718657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
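The nvmf_tcp_init trace above reduces to a short, reproducible sequence. The sketch below is assembled only from the commands already logged here; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.1/10.0.0.2 addresses are simply what this rig uses, not fixed values.

    # move the target-side E810 port into its own namespace so the kernel
    # initiator and the SPDK target do not share a network stack
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 on cvl_0_1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP listener port and sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmf_tgt is run through the same namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...), which is how the target instance with pid 1197003 was launched just above.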
00:23:45.945 [2024-04-15 22:50:30.718668] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.945 [2024-04-15 22:50:30.718677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.945 [2024-04-15 22:50:30.718814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.945 [2024-04-15 22:50:30.718949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.945 [2024-04-15 22:50:30.718997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.945 [2024-04-15 22:50:30.718997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.890 22:50:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:46.890 22:50:31 -- common/autotest_common.sh@852 -- # return 0 00:23:46.890 22:50:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:46.890 22:50:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:46.890 22:50:31 -- common/autotest_common.sh@10 -- # set +x 00:23:46.890 22:50:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.890 22:50:31 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:46.891 22:50:31 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:46.891 22:50:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.891 22:50:31 -- common/autotest_common.sh@10 -- # set +x 00:23:46.891 Malloc0 00:23:46.891 22:50:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.891 22:50:31 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:46.891 22:50:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.891 22:50:31 -- common/autotest_common.sh@10 -- # set +x 00:23:46.891 Delay0 00:23:46.891 22:50:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.891 22:50:31 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:46.891 22:50:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.891 22:50:31 -- common/autotest_common.sh@10 -- # set +x 00:23:46.891 [2024-04-15 22:50:31.466067] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.891 22:50:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.891 22:50:31 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:46.891 22:50:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.891 22:50:31 -- common/autotest_common.sh@10 -- # set +x 00:23:46.891 22:50:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.891 22:50:31 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:46.891 22:50:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.891 22:50:31 -- common/autotest_common.sh@10 -- # set +x 00:23:46.891 22:50:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.891 22:50:31 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.891 22:50:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.891 22:50:31 -- common/autotest_common.sh@10 -- # set +x 00:23:46.891 [2024-04-15 22:50:31.506311] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.891 22:50:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.891 22:50:31 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:48.278 22:50:32 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:48.278 22:50:32 -- common/autotest_common.sh@1177 -- # local i=0 00:23:48.278 22:50:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:48.278 22:50:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:48.278 22:50:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:50.189 22:50:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:50.472 22:50:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:50.472 22:50:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:50.472 22:50:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:50.472 22:50:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:50.472 22:50:35 -- common/autotest_common.sh@1187 -- # return 0 00:23:50.472 22:50:35 -- target/initiator_timeout.sh@35 -- # fio_pid=1198025 00:23:50.472 22:50:35 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:50.472 22:50:35 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:50.472 [global] 00:23:50.472 thread=1 00:23:50.472 invalidate=1 00:23:50.472 rw=write 00:23:50.472 time_based=1 00:23:50.472 runtime=60 00:23:50.472 ioengine=libaio 00:23:50.472 direct=1 00:23:50.472 bs=4096 00:23:50.472 iodepth=1 00:23:50.472 norandommap=0 00:23:50.472 numjobs=1 00:23:50.472 00:23:50.472 verify_dump=1 00:23:50.472 verify_backlog=512 00:23:50.472 verify_state_save=0 00:23:50.472 do_verify=1 00:23:50.472 verify=crc32c-intel 00:23:50.472 [job0] 00:23:50.472 filename=/dev/nvme0n1 00:23:50.472 Could not set queue depth (nvme0n1) 00:23:50.740 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:50.740 fio-3.35 00:23:50.740 Starting 1 thread 00:23:53.285 22:50:38 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:53.285 22:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.285 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:23:53.285 true 00:23:53.285 22:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.285 22:50:38 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:53.285 22:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.285 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:23:53.285 true 00:23:53.285 22:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.285 22:50:38 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:53.285 22:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.285 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:23:53.285 true 00:23:53.285 22:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.285 22:50:38 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
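Condensed, the initiator_timeout setup just traced is a short RPC sequence. This is a sketch, not the script verbatim: rpc_cmd in these tests is a thin wrapper around scripts/rpc.py, the host NQN/host ID flags on the connect are elided, and all four latency knobs are rounded to the same 31000000 value here (the delay bdev takes microseconds, so this should be roughly 31 s, long enough to trip the initiator's I/O timeout while fio is running).

    # back the namespace with a RAM disk, then wrap it in a delay bdev
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

    # export Delay0 over NVMe/TCP on the namespaced target address
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # connect from the initiator and keep the 60 s fio write job in flight
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # then, with I/O outstanding, push every latency metric up to ~31 s
    for metric in avg_read avg_write p99_read p99_write; do
        rpc.py bdev_delay_update_latency Delay0 $metric 31000000
    done

The second half of the test (the sleep and the update-to-30 calls that follow) drops the latencies back down so the queued I/O can complete and fio can finish cleanly.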
00:23:53.285 22:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.285 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:23:53.285 true 00:23:53.285 22:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.285 22:50:38 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:56.589 22:50:41 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:56.589 22:50:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.589 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:23:56.589 true 00:23:56.589 22:50:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.589 22:50:41 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:56.589 22:50:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.589 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:23:56.589 true 00:23:56.589 22:50:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.589 22:50:41 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:56.589 22:50:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.589 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:23:56.589 true 00:23:56.589 22:50:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.589 22:50:41 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:56.589 22:50:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.589 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:23:56.589 true 00:23:56.589 22:50:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.589 22:50:41 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:56.589 22:50:41 -- target/initiator_timeout.sh@54 -- # wait 1198025 00:24:52.868 00:24:52.868 job0: (groupid=0, jobs=1): err= 0: pid=1198213: Mon Apr 15 22:51:35 2024 00:24:52.868 read: IOPS=162, BW=649KiB/s (664kB/s)(38.0MiB/60001msec) 00:24:52.868 slat (nsec): min=6169, max=70034, avg=26877.43, stdev=4110.38 00:24:52.868 clat (usec): min=257, max=41987, avg=1060.79, stdev=930.72 00:24:52.868 lat (usec): min=283, max=42013, avg=1087.67, stdev=930.73 00:24:52.868 clat percentiles (usec): 00:24:52.868 | 1.00th=[ 676], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 979], 00:24:52.868 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1057], 00:24:52.868 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205], 00:24:52.868 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1369], 99.95th=[41157], 00:24:52.868 | 99.99th=[42206] 00:24:52.868 write: IOPS=170, BW=681KiB/s (697kB/s)(39.9MiB/60001msec); 0 zone resets 00:24:52.868 slat (usec): min=9, max=32394, avg=34.22, stdev=335.05 00:24:52.868 clat (usec): min=254, max=41952k, avg=4787.94, stdev=415074.90 00:24:52.868 lat (usec): min=264, max=41952k, avg=4822.16, stdev=415075.03 00:24:52.868 clat percentiles (usec): 00:24:52.868 | 1.00th=[ 437], 5.00th=[ 498], 10.00th=[ 537], 20.00th=[ 611], 00:24:52.868 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 685], 60.00th=[ 709], 00:24:52.868 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 824], 00:24:52.868 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 914], 99.95th=[ 1012], 00:24:52.868 | 99.99th=[ 1434] 00:24:52.868 bw ( KiB/s): min= 208, max= 4096, per=100.00%, avg=2223.54, stdev=1338.87, samples=35 00:24:52.868 iops : min= 52, max= 1024, avg=555.89, stdev=334.72, samples=35 00:24:52.868 lat (usec) : 500=2.89%, 750=35.19%, 1000=25.74% 
00:24:52.868 lat (msec) : 2=36.15%, 50=0.03%, >=2000=0.01% 00:24:52.868 cpu : usr=0.79%, sys=1.19%, ctx=19950, majf=0, minf=33 00:24:52.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.868 issued rwts: total=9728,10215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:52.868 00:24:52.868 Run status group 0 (all jobs): 00:24:52.868 READ: bw=649KiB/s (664kB/s), 649KiB/s-649KiB/s (664kB/s-664kB/s), io=38.0MiB (39.8MB), run=60001-60001msec 00:24:52.868 WRITE: bw=681KiB/s (697kB/s), 681KiB/s-681KiB/s (697kB/s-697kB/s), io=39.9MiB (41.8MB), run=60001-60001msec 00:24:52.868 00:24:52.868 Disk stats (read/write): 00:24:52.868 nvme0n1: ios=9782/10075, merge=0/0, ticks=10726/5806, in_queue=16532, util=99.93% 00:24:52.868 22:51:35 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:52.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:52.868 22:51:35 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:52.868 22:51:35 -- common/autotest_common.sh@1198 -- # local i=0 00:24:52.868 22:51:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:52.868 22:51:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:52.868 22:51:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:52.868 22:51:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:52.868 22:51:35 -- common/autotest_common.sh@1210 -- # return 0 00:24:52.868 22:51:35 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:52.868 22:51:35 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:52.868 nvmf hotplug test: fio successful as expected 00:24:52.868 22:51:35 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.868 22:51:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.868 22:51:35 -- common/autotest_common.sh@10 -- # set +x 00:24:52.868 22:51:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.868 22:51:35 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:52.868 22:51:35 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:52.868 22:51:35 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:52.868 22:51:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:52.868 22:51:35 -- nvmf/common.sh@116 -- # sync 00:24:52.868 22:51:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:52.868 22:51:35 -- nvmf/common.sh@119 -- # set +e 00:24:52.868 22:51:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:52.868 22:51:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:52.868 rmmod nvme_tcp 00:24:52.868 rmmod nvme_fabrics 00:24:52.868 rmmod nvme_keyring 00:24:52.868 22:51:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:52.868 22:51:35 -- nvmf/common.sh@123 -- # set -e 00:24:52.868 22:51:35 -- nvmf/common.sh@124 -- # return 0 00:24:52.868 22:51:35 -- nvmf/common.sh@477 -- # '[' -n 1197003 ']' 00:24:52.868 22:51:35 -- nvmf/common.sh@478 -- # killprocess 1197003 00:24:52.868 22:51:35 -- common/autotest_common.sh@926 -- # '[' -z 1197003 ']' 00:24:52.868 22:51:35 -- common/autotest_common.sh@930 -- # kill -0 1197003 
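For reference, the killprocess trace interleaved through this teardown reduces to a helper along these lines. This is a reconstruction from the xtrace output above, not the verbatim autotest_common.sh source; only the non-sudo branch is exercised in this run.

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                  # bail out if the target already exited

        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi

        if [ "$process_name" != sudo ]; then        # the reactor_0 path taken above
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }

It is invoked here as killprocess 1197003, i.e. against the nvmf_tgt pid saved when the target was started.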
00:24:52.868 22:51:35 -- common/autotest_common.sh@931 -- # uname 00:24:52.868 22:51:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:52.868 22:51:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1197003 00:24:52.868 22:51:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:52.868 22:51:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:52.868 22:51:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1197003' 00:24:52.868 killing process with pid 1197003 00:24:52.868 22:51:35 -- common/autotest_common.sh@945 -- # kill 1197003 00:24:52.868 22:51:35 -- common/autotest_common.sh@950 -- # wait 1197003 00:24:52.868 22:51:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:52.868 22:51:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:52.868 22:51:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:52.868 22:51:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.868 22:51:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:52.868 22:51:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.868 22:51:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.868 22:51:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.440 22:51:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:53.440 00:24:53.440 real 1m15.718s 00:24:53.440 user 4m39.826s 00:24:53.440 sys 0m8.456s 00:24:53.440 22:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.440 22:51:38 -- common/autotest_common.sh@10 -- # set +x 00:24:53.440 ************************************ 00:24:53.440 END TEST nvmf_initiator_timeout 00:24:53.440 ************************************ 00:24:53.440 22:51:38 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:53.440 22:51:38 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:53.440 22:51:38 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:53.440 22:51:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:53.440 22:51:38 -- common/autotest_common.sh@10 -- # set +x 00:25:01.648 22:51:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:01.648 22:51:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:01.648 22:51:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:01.648 22:51:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:01.648 22:51:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:01.648 22:51:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:01.648 22:51:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:01.648 22:51:45 -- nvmf/common.sh@294 -- # net_devs=() 00:25:01.648 22:51:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:01.648 22:51:45 -- nvmf/common.sh@295 -- # e810=() 00:25:01.648 22:51:45 -- nvmf/common.sh@295 -- # local -ga e810 00:25:01.648 22:51:45 -- nvmf/common.sh@296 -- # x722=() 00:25:01.648 22:51:45 -- nvmf/common.sh@296 -- # local -ga x722 00:25:01.648 22:51:45 -- nvmf/common.sh@297 -- # mlx=() 00:25:01.648 22:51:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:01.648 22:51:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:25:01.648 22:51:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.648 22:51:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:01.648 22:51:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:01.648 22:51:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:01.648 22:51:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:01.648 22:51:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:01.648 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:01.648 22:51:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:01.648 22:51:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:01.648 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:01.648 22:51:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:01.648 22:51:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:01.648 22:51:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.648 22:51:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:01.648 22:51:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.648 22:51:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:01.648 Found net devices under 0000:31:00.0: cvl_0_0 00:25:01.648 22:51:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.648 22:51:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:01.648 22:51:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.648 22:51:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:01.648 22:51:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.648 22:51:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:01.648 Found net devices under 0000:31:00.1: cvl_0_1 00:25:01.648 22:51:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.648 22:51:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:01.648 22:51:45 
-- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.648 22:51:45 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:25:01.648 22:51:45 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:01.648 22:51:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:01.648 22:51:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:01.648 22:51:45 -- common/autotest_common.sh@10 -- # set +x 00:25:01.648 ************************************ 00:25:01.648 START TEST nvmf_perf_adq 00:25:01.648 ************************************ 00:25:01.648 22:51:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:01.648 * Looking for test storage... 00:25:01.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:01.648 22:51:45 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.648 22:51:45 -- nvmf/common.sh@7 -- # uname -s 00:25:01.648 22:51:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.648 22:51:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.648 22:51:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.648 22:51:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.648 22:51:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.648 22:51:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.648 22:51:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.648 22:51:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.648 22:51:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.648 22:51:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.648 22:51:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:01.648 22:51:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:01.648 22:51:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.648 22:51:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.648 22:51:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.648 22:51:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.648 22:51:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.648 22:51:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.648 22:51:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.648 22:51:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.648 22:51:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.648 22:51:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.648 22:51:45 -- paths/export.sh@5 -- # export PATH 00:25:01.648 22:51:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.648 22:51:45 -- nvmf/common.sh@46 -- # : 0 00:25:01.648 22:51:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:01.648 22:51:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:01.648 22:51:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:01.648 22:51:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.648 22:51:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.648 22:51:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:01.648 22:51:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:01.648 22:51:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:01.648 22:51:45 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:01.648 22:51:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:01.649 22:51:45 -- common/autotest_common.sh@10 -- # set +x 00:25:09.794 22:51:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:09.794 22:51:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:09.794 22:51:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:09.794 22:51:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:09.794 22:51:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:09.794 22:51:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:09.794 22:51:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:09.794 22:51:53 -- nvmf/common.sh@294 -- # net_devs=() 00:25:09.794 22:51:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:09.794 22:51:53 -- nvmf/common.sh@295 -- # e810=() 00:25:09.794 22:51:53 -- nvmf/common.sh@295 -- # local -ga e810 00:25:09.794 22:51:53 -- nvmf/common.sh@296 -- # x722=() 00:25:09.794 22:51:53 -- nvmf/common.sh@296 -- # local -ga x722 00:25:09.794 22:51:53 -- nvmf/common.sh@297 -- # mlx=() 00:25:09.794 22:51:53 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:25:09.794 22:51:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.794 22:51:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.795 22:51:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:09.795 22:51:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:09.795 22:51:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:09.795 22:51:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:09.795 22:51:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:09.795 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:09.795 22:51:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:09.795 22:51:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:09.795 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:09.795 22:51:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:09.795 22:51:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:09.795 22:51:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:09.795 22:51:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.795 22:51:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:09.795 22:51:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.795 22:51:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:09.795 Found net devices under 0000:31:00.0: cvl_0_0 00:25:09.795 22:51:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.795 22:51:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:09.795 22:51:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
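The xtrace lines above come from gather_supported_nvmf_pci_devs in test/nvmf/common.sh: it fills per-family arrays of supported PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus a list of Mellanox ConnectX IDs), keeps only the e810 list (the "[[ e810 == e810 ]]" branch above), and then resolves each matching PCI address to its kernel interface through /sys/bus/pci/devices/<bdf>/net. A minimal standalone sketch of that lookup, assuming the same sysfs layout (the variable names and the trimmed ID list here are illustrative, not the script's exact code):

#!/usr/bin/env bash
# Sketch: resolve NVMe-oF-capable NICs (matched by PCI vendor:device ID) to net interface names.
# The ID list is trimmed to the E810 IDs seen in the trace; nvmf/common.sh carries a longer table.
supported_ids=("0x8086:0x1592" "0x8086:0x159b")
net_devs=()
for dev in /sys/bus/pci/devices/*; do
    id="$(<"$dev/vendor"):$(<"$dev/device")"          # e.g. 0x8086:0x159b
    for want in "${supported_ids[@]}"; do
        [[ $id == "$want" ]] || continue
        for ifdir in "$dev"/net/*; do                 # interface(s) bound to this PCI function
            [[ -e $ifdir ]] && net_devs+=("${ifdir##*/}")
        done
    done
done
(( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"

On this node both 0000:31:00.0 and 0000:31:00.1 report device ID 0x159b and map to cvl_0_0 and cvl_0_1, which is what the "Found net devices under ..." lines in the trace record.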
00:25:09.795 22:51:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:09.795 22:51:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.795 22:51:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:09.795 Found net devices under 0000:31:00.1: cvl_0_1 00:25:09.795 22:51:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.795 22:51:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:09.795 22:51:53 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.795 22:51:53 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:09.795 22:51:53 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:09.795 22:51:53 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:25:09.795 22:51:53 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:10.395 22:51:55 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:12.310 22:51:56 -- target/perf_adq.sh@54 -- # sleep 5 00:25:17.614 22:52:01 -- target/perf_adq.sh@67 -- # nvmftestinit 00:25:17.615 22:52:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:17.615 22:52:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.615 22:52:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:17.615 22:52:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:17.615 22:52:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:17.615 22:52:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.615 22:52:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.615 22:52:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.615 22:52:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:17.615 22:52:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:17.615 22:52:02 -- common/autotest_common.sh@10 -- # set +x 00:25:17.615 22:52:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:17.615 22:52:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:17.615 22:52:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:17.615 22:52:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:17.615 22:52:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:17.615 22:52:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:17.615 22:52:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:17.615 22:52:02 -- nvmf/common.sh@294 -- # net_devs=() 00:25:17.615 22:52:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:17.615 22:52:02 -- nvmf/common.sh@295 -- # e810=() 00:25:17.615 22:52:02 -- nvmf/common.sh@295 -- # local -ga e810 00:25:17.615 22:52:02 -- nvmf/common.sh@296 -- # x722=() 00:25:17.615 22:52:02 -- nvmf/common.sh@296 -- # local -ga x722 00:25:17.615 22:52:02 -- nvmf/common.sh@297 -- # mlx=() 00:25:17.615 22:52:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:17.615 22:52:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.615 22:52:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:17.615 22:52:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:17.615 22:52:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:17.615 22:52:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:17.615 22:52:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:17.615 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:17.615 22:52:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:17.615 22:52:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:17.615 22:52:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:17.615 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:17.616 22:52:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:17.616 22:52:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:17.616 22:52:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.616 22:52:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.616 22:52:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:17.616 22:52:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:17.616 22:52:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:17.616 22:52:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:17.616 22:52:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:17.616 22:52:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.616 22:52:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:17.616 22:52:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.616 22:52:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:17.616 Found net devices under 0000:31:00.0: cvl_0_0 00:25:17.616 22:52:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.616 22:52:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:17.616 22:52:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.616 22:52:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:17.616 22:52:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.616 22:52:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:17.616 Found net devices under 0000:31:00.1: cvl_0_1 00:25:17.616 22:52:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.616 22:52:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:17.616 22:52:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:17.616 22:52:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:17.616 22:52:02 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:17.616 22:52:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:17.616 22:52:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.616 22:52:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.616 22:52:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.616 22:52:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:17.616 22:52:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.616 22:52:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.616 22:52:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:17.616 22:52:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.616 22:52:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.616 22:52:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:17.616 22:52:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:17.617 22:52:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.617 22:52:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.617 22:52:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.617 22:52:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.617 22:52:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:17.617 22:52:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.617 22:52:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.617 22:52:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.617 22:52:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:17.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:25:17.617 00:25:17.617 --- 10.0.0.2 ping statistics --- 00:25:17.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.617 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:25:17.617 22:52:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:25:17.617 00:25:17.617 --- 10.0.0.1 ping statistics --- 00:25:17.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.617 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:25:17.617 22:52:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.617 22:52:02 -- nvmf/common.sh@410 -- # return 0 00:25:17.617 22:52:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:17.617 22:52:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.617 22:52:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:17.617 22:52:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:17.617 22:52:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.617 22:52:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:17.617 22:52:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:17.617 22:52:02 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:17.617 22:52:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:17.617 22:52:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:17.617 22:52:02 -- common/autotest_common.sh@10 -- # set +x 00:25:17.617 22:52:02 -- nvmf/common.sh@469 -- # nvmfpid=1221000 00:25:17.617 22:52:02 -- nvmf/common.sh@470 -- # waitforlisten 1221000 00:25:17.617 22:52:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:17.617 22:52:02 -- common/autotest_common.sh@819 -- # '[' -z 1221000 ']' 00:25:17.617 22:52:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.617 22:52:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:17.618 22:52:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.618 22:52:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:17.618 22:52:02 -- common/autotest_common.sh@10 -- # set +x 00:25:17.618 [2024-04-15 22:52:02.416324] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:17.618 [2024-04-15 22:52:02.416373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.884 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.884 [2024-04-15 22:52:02.489888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.884 [2024-04-15 22:52:02.553198] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:17.884 [2024-04-15 22:52:02.553338] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.884 [2024-04-15 22:52:02.553347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.884 [2024-04-15 22:52:02.553356] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
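These startup notices come from the nvmf_tgt process that nvmftestinit launched inside the cvl_0_0_ns_spdk namespace. The namespace itself was assembled a few lines earlier by nvmf_tcp_init: one E810 port is moved into a private network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) can reach each other over the physical link from a single host. Condensed from the commands in the trace, with addresses and interface names as logged:

# Target port goes into its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace

The two pings above are the sanity check that this loopback path works in both directions before the target is started with "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc", whose reactor startup is logged next.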
00:25:17.884 [2024-04-15 22:52:02.553553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.884 [2024-04-15 22:52:02.553655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.884 [2024-04-15 22:52:02.553987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.884 [2024-04-15 22:52:02.553989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.456 22:52:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:18.456 22:52:03 -- common/autotest_common.sh@852 -- # return 0 00:25:18.456 22:52:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:18.456 22:52:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:18.456 22:52:03 -- common/autotest_common.sh@10 -- # set +x 00:25:18.456 22:52:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.456 22:52:03 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:25:18.456 22:52:03 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:18.456 22:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.456 22:52:03 -- common/autotest_common.sh@10 -- # set +x 00:25:18.456 22:52:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.456 22:52:03 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:18.456 22:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.456 22:52:03 -- common/autotest_common.sh@10 -- # set +x 00:25:18.716 22:52:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.716 22:52:03 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:18.716 22:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.716 22:52:03 -- common/autotest_common.sh@10 -- # set +x 00:25:18.716 [2024-04-15 22:52:03.307512] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.716 22:52:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.716 22:52:03 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:18.716 22:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.716 22:52:03 -- common/autotest_common.sh@10 -- # set +x 00:25:18.716 Malloc1 00:25:18.716 22:52:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.716 22:52:03 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:18.716 22:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.716 22:52:03 -- common/autotest_common.sh@10 -- # set +x 00:25:18.716 22:52:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.716 22:52:03 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:18.716 22:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.716 22:52:03 -- common/autotest_common.sh@10 -- # set +x 00:25:18.716 22:52:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.716 22:52:03 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.716 22:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.716 22:52:03 -- common/autotest_common.sh@10 -- # set +x 00:25:18.716 [2024-04-15 22:52:03.366946] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.717 22:52:03 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.717 22:52:03 -- target/perf_adq.sh@73 -- # perfpid=1221049 00:25:18.717 22:52:03 -- target/perf_adq.sh@74 -- # sleep 2 00:25:18.717 22:52:03 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:18.717 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.630 22:52:05 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:25:20.630 22:52:05 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:20.630 22:52:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.630 22:52:05 -- target/perf_adq.sh@76 -- # wc -l 00:25:20.630 22:52:05 -- common/autotest_common.sh@10 -- # set +x 00:25:20.630 22:52:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.630 22:52:05 -- target/perf_adq.sh@76 -- # count=4 00:25:20.630 22:52:05 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:25:20.630 22:52:05 -- target/perf_adq.sh@81 -- # wait 1221049 00:25:28.770 Initializing NVMe Controllers 00:25:28.770 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:28.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:28.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:28.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:28.770 Initialization complete. Launching workers. 00:25:28.770 ======================================================== 00:25:28.770 Latency(us) 00:25:28.770 Device Information : IOPS MiB/s Average min max 00:25:28.770 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11658.60 45.54 5489.78 886.37 9458.38 00:25:28.770 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15828.20 61.83 4043.03 1257.20 9007.66 00:25:28.770 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14995.00 58.57 4281.20 1172.51 45178.56 00:25:28.770 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11426.10 44.63 5600.90 1318.29 11813.04 00:25:28.770 ======================================================== 00:25:28.771 Total : 53907.89 210.58 4752.37 886.37 45178.56 00:25:28.771 00:25:29.033 22:52:13 -- target/perf_adq.sh@82 -- # nvmftestfini 00:25:29.033 22:52:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:29.033 22:52:13 -- nvmf/common.sh@116 -- # sync 00:25:29.033 22:52:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:29.033 22:52:13 -- nvmf/common.sh@119 -- # set +e 00:25:29.033 22:52:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:29.033 22:52:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:29.033 rmmod nvme_tcp 00:25:29.033 rmmod nvme_fabrics 00:25:29.033 rmmod nvme_keyring 00:25:29.033 22:52:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:29.033 22:52:13 -- nvmf/common.sh@123 -- # set -e 00:25:29.033 22:52:13 -- nvmf/common.sh@124 -- # return 0 00:25:29.033 22:52:13 -- nvmf/common.sh@477 -- # '[' -n 1221000 ']' 00:25:29.033 22:52:13 -- nvmf/common.sh@478 -- # killprocess 1221000 00:25:29.033 22:52:13 -- common/autotest_common.sh@926 -- # '[' -z 1221000 ']' 00:25:29.033 22:52:13 -- common/autotest_common.sh@930 -- # 
kill -0 1221000 00:25:29.033 22:52:13 -- common/autotest_common.sh@931 -- # uname 00:25:29.033 22:52:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:29.033 22:52:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1221000 00:25:29.033 22:52:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:29.033 22:52:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:29.033 22:52:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1221000' 00:25:29.033 killing process with pid 1221000 00:25:29.033 22:52:13 -- common/autotest_common.sh@945 -- # kill 1221000 00:25:29.033 22:52:13 -- common/autotest_common.sh@950 -- # wait 1221000 00:25:29.294 22:52:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:29.294 22:52:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:29.294 22:52:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:29.294 22:52:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.294 22:52:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:29.294 22:52:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.294 22:52:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.294 22:52:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.210 22:52:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:31.211 22:52:15 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:31.211 22:52:15 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:32.640 22:52:17 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:35.185 22:52:19 -- target/perf_adq.sh@54 -- # sleep 5 00:25:40.478 22:52:24 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:40.478 22:52:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:40.478 22:52:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.478 22:52:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:40.478 22:52:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:40.478 22:52:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:40.478 22:52:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.478 22:52:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.478 22:52:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.478 22:52:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:40.478 22:52:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:40.478 22:52:24 -- common/autotest_common.sh@10 -- # set +x 00:25:40.478 22:52:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.478 22:52:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:40.478 22:52:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:40.478 22:52:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:40.478 22:52:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:40.478 22:52:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:40.478 22:52:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:40.478 22:52:24 -- nvmf/common.sh@294 -- # net_devs=() 00:25:40.478 22:52:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:40.478 22:52:24 -- nvmf/common.sh@295 -- # e810=() 00:25:40.478 22:52:24 -- nvmf/common.sh@295 -- # local -ga e810 00:25:40.478 22:52:24 -- nvmf/common.sh@296 -- # x722=() 00:25:40.478 22:52:24 -- nvmf/common.sh@296 -- # local -ga x722 00:25:40.478 22:52:24 -- nvmf/common.sh@297 -- # mlx=() 00:25:40.478 22:52:24 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:25:40.478 22:52:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.478 22:52:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:40.478 22:52:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:40.478 22:52:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:40.478 22:52:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:40.478 22:52:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:40.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:40.478 22:52:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:40.478 22:52:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:40.478 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:40.478 22:52:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:40.478 22:52:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:40.478 22:52:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.478 22:52:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:40.478 22:52:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.478 22:52:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:40.478 Found net devices under 0000:31:00.0: cvl_0_0 00:25:40.478 22:52:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.478 22:52:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:40.478 22:52:24 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.478 22:52:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:40.478 22:52:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.478 22:52:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:40.478 Found net devices under 0000:31:00.1: cvl_0_1 00:25:40.478 22:52:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.478 22:52:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:40.478 22:52:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:40.478 22:52:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:40.478 22:52:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:40.478 22:52:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.478 22:52:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.478 22:52:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.478 22:52:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:40.478 22:52:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.478 22:52:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.478 22:52:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:40.478 22:52:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.478 22:52:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.478 22:52:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:40.478 22:52:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:40.478 22:52:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.478 22:52:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.478 22:52:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.478 22:52:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.478 22:52:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:40.478 22:52:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.478 22:52:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.478 22:52:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.478 22:52:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:40.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:25:40.478 00:25:40.478 --- 10.0.0.2 ping statistics --- 00:25:40.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.478 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:25:40.479 22:52:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:25:40.479 00:25:40.479 --- 10.0.0.1 ping statistics --- 00:25:40.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.479 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:25:40.479 22:52:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.479 22:52:24 -- nvmf/common.sh@410 -- # return 0 00:25:40.479 22:52:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:40.479 22:52:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.479 22:52:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:40.479 22:52:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:40.479 22:52:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.479 22:52:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:40.479 22:52:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:40.479 22:52:24 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:40.479 22:52:24 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:40.479 22:52:24 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:40.479 22:52:24 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:40.479 net.core.busy_poll = 1 00:25:40.479 22:52:24 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:40.479 net.core.busy_read = 1 00:25:40.479 22:52:24 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:40.479 22:52:24 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:40.479 22:52:25 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:40.479 22:52:25 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:40.479 22:52:25 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:40.479 22:52:25 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:40.479 22:52:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:40.479 22:52:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:40.479 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:25:40.479 22:52:25 -- nvmf/common.sh@469 -- # nvmfpid=1225855 00:25:40.479 22:52:25 -- nvmf/common.sh@470 -- # waitforlisten 1225855 00:25:40.479 22:52:25 -- common/autotest_common.sh@819 -- # '[' -z 1225855 ']' 00:25:40.479 22:52:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:40.479 22:52:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.479 22:52:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:40.479 22:52:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
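This second target instance is the ADQ-enabled pass of the test. Just before it was started, adq_configure_driver applied the driver-side ADQ knobs shown in the trace above; condensed here as a sketch, with the device commands run inside the target namespace exactly as logged:

ns=(ip netns exec cvl_0_0_ns_spdk)
"${ns[@]}" ethtool --offload cvl_0_0 hw-tc-offload on                # let the E810 offload traffic classes
"${ns[@]}" ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                       # enable socket busy polling
sysctl -w net.core.busy_read=1
# mqprio: 2 traffic classes, priority 0 -> TC0 (queues 0-1), priority 1 -> TC1 (queues 2-3),
# offloaded to the NIC in channel mode.
"${ns[@]}" tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
"${ns[@]}" tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic for 10.0.0.2:4420 into hardware TC1 (skip_sw = hardware only).
"${ns[@]}" tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

On the SPDK side the target is then configured with "sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix" and a TCP transport created with "--sock-priority 1", as the following trace shows; the later nvmf_get_stats/jq check inspects how the qpairs were distributed across poll groups to confirm the steering took effect.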
00:25:40.479 22:52:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:40.479 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:25:40.479 [2024-04-15 22:52:25.220320] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:40.479 [2024-04-15 22:52:25.220387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.479 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.740 [2024-04-15 22:52:25.295823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.740 [2024-04-15 22:52:25.359440] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:40.740 [2024-04-15 22:52:25.359577] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.740 [2024-04-15 22:52:25.359588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.740 [2024-04-15 22:52:25.359596] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.740 [2024-04-15 22:52:25.359647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.740 [2024-04-15 22:52:25.359760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.740 [2024-04-15 22:52:25.359898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.740 [2024-04-15 22:52:25.359899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.313 22:52:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:41.313 22:52:25 -- common/autotest_common.sh@852 -- # return 0 00:25:41.313 22:52:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:41.313 22:52:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:41.313 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:25:41.313 22:52:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.313 22:52:26 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:41.313 22:52:26 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:41.313 22:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.313 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:25:41.313 22:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.313 22:52:26 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:41.313 22:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.313 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:25:41.313 22:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.313 22:52:26 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:41.313 22:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.313 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:25:41.574 [2024-04-15 22:52:26.123494] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.574 22:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.574 22:52:26 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:41.574 22:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.574 22:52:26 -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.574 Malloc1 00:25:41.574 22:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.574 22:52:26 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:41.574 22:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.574 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:25:41.574 22:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.574 22:52:26 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:41.574 22:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.574 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:25:41.574 22:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.574 22:52:26 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.574 22:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.574 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:25:41.574 [2024-04-15 22:52:26.178823] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.574 22:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.574 22:52:26 -- target/perf_adq.sh@94 -- # perfpid=1225908 00:25:41.574 22:52:26 -- target/perf_adq.sh@95 -- # sleep 2 00:25:41.574 22:52:26 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:41.574 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.487 22:52:28 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:43.487 22:52:28 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:43.487 22:52:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.487 22:52:28 -- target/perf_adq.sh@97 -- # wc -l 00:25:43.487 22:52:28 -- common/autotest_common.sh@10 -- # set +x 00:25:43.487 22:52:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.487 22:52:28 -- target/perf_adq.sh@97 -- # count=2 00:25:43.487 22:52:28 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:43.487 22:52:28 -- target/perf_adq.sh@103 -- # wait 1225908 00:25:51.644 Initializing NVMe Controllers 00:25:51.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:51.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:51.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:51.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:51.644 Initialization complete. Launching workers. 
00:25:51.644 ======================================================== 00:25:51.644 Latency(us) 00:25:51.644 Device Information : IOPS MiB/s Average min max 00:25:51.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5586.80 21.82 11457.31 1567.13 57490.00 00:25:51.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6946.30 27.13 9214.20 1350.94 55159.04 00:25:51.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 20921.80 81.73 3064.45 1059.73 43766.35 00:25:51.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4748.60 18.55 13520.49 1355.47 57724.04 00:25:51.644 ======================================================== 00:25:51.644 Total : 38203.50 149.23 6709.64 1059.73 57724.04 00:25:51.644 00:25:51.644 22:52:36 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:51.644 22:52:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:51.644 22:52:36 -- nvmf/common.sh@116 -- # sync 00:25:51.644 22:52:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:51.644 22:52:36 -- nvmf/common.sh@119 -- # set +e 00:25:51.644 22:52:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:51.644 22:52:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:51.644 rmmod nvme_tcp 00:25:51.644 rmmod nvme_fabrics 00:25:51.644 rmmod nvme_keyring 00:25:51.644 22:52:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:51.644 22:52:36 -- nvmf/common.sh@123 -- # set -e 00:25:51.644 22:52:36 -- nvmf/common.sh@124 -- # return 0 00:25:51.644 22:52:36 -- nvmf/common.sh@477 -- # '[' -n 1225855 ']' 00:25:51.644 22:52:36 -- nvmf/common.sh@478 -- # killprocess 1225855 00:25:51.644 22:52:36 -- common/autotest_common.sh@926 -- # '[' -z 1225855 ']' 00:25:51.644 22:52:36 -- common/autotest_common.sh@930 -- # kill -0 1225855 00:25:51.644 22:52:36 -- common/autotest_common.sh@931 -- # uname 00:25:51.644 22:52:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:51.644 22:52:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1225855 00:25:51.906 22:52:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:51.906 22:52:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:51.906 22:52:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1225855' 00:25:51.906 killing process with pid 1225855 00:25:51.906 22:52:36 -- common/autotest_common.sh@945 -- # kill 1225855 00:25:51.906 22:52:36 -- common/autotest_common.sh@950 -- # wait 1225855 00:25:51.906 22:52:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:51.906 22:52:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:51.906 22:52:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:51.906 22:52:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.906 22:52:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:51.906 22:52:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.906 22:52:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.906 22:52:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.211 22:52:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:55.211 22:52:39 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:55.211 00:25:55.211 real 0m53.999s 00:25:55.211 user 2m49.659s 00:25:55.211 sys 0m11.089s 00:25:55.211 22:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.211 22:52:39 -- common/autotest_common.sh@10 -- # set +x 00:25:55.211 
************************************ 00:25:55.211 END TEST nvmf_perf_adq 00:25:55.211 ************************************ 00:25:55.211 22:52:39 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:55.211 22:52:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:55.211 22:52:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:55.211 22:52:39 -- common/autotest_common.sh@10 -- # set +x 00:25:55.211 ************************************ 00:25:55.211 START TEST nvmf_shutdown 00:25:55.211 ************************************ 00:25:55.211 22:52:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:55.211 * Looking for test storage... 00:25:55.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:55.211 22:52:39 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.211 22:52:39 -- nvmf/common.sh@7 -- # uname -s 00:25:55.211 22:52:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.211 22:52:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.211 22:52:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.211 22:52:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.211 22:52:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.211 22:52:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.211 22:52:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.211 22:52:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.211 22:52:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.211 22:52:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.211 22:52:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:55.211 22:52:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:55.211 22:52:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.211 22:52:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.211 22:52:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.211 22:52:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.211 22:52:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.211 22:52:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.211 22:52:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.211 22:52:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.211 22:52:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.211 22:52:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.211 22:52:39 -- paths/export.sh@5 -- # export PATH 00:25:55.211 22:52:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.211 22:52:39 -- nvmf/common.sh@46 -- # : 0 00:25:55.211 22:52:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:55.211 22:52:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:55.211 22:52:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:55.211 22:52:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.211 22:52:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.211 22:52:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:55.211 22:52:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:55.211 22:52:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:55.211 22:52:39 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:55.211 22:52:39 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:55.211 22:52:39 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:55.211 22:52:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:55.211 22:52:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:55.211 22:52:39 -- common/autotest_common.sh@10 -- # set +x 00:25:55.211 ************************************ 00:25:55.211 START TEST nvmf_shutdown_tc1 00:25:55.211 ************************************ 00:25:55.211 22:52:39 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:25:55.211 22:52:39 -- target/shutdown.sh@74 -- # starttarget 00:25:55.211 22:52:39 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:55.211 22:52:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:55.211 22:52:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.211 22:52:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:55.211 22:52:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:55.211 22:52:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:55.211 
22:52:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.211 22:52:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.211 22:52:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.211 22:52:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:55.211 22:52:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:55.211 22:52:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:55.211 22:52:39 -- common/autotest_common.sh@10 -- # set +x 00:26:03.363 22:52:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:03.363 22:52:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:03.363 22:52:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:03.363 22:52:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:03.363 22:52:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:03.363 22:52:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:03.363 22:52:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:03.363 22:52:47 -- nvmf/common.sh@294 -- # net_devs=() 00:26:03.363 22:52:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:03.363 22:52:47 -- nvmf/common.sh@295 -- # e810=() 00:26:03.363 22:52:47 -- nvmf/common.sh@295 -- # local -ga e810 00:26:03.363 22:52:47 -- nvmf/common.sh@296 -- # x722=() 00:26:03.363 22:52:47 -- nvmf/common.sh@296 -- # local -ga x722 00:26:03.363 22:52:47 -- nvmf/common.sh@297 -- # mlx=() 00:26:03.363 22:52:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:03.363 22:52:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.363 22:52:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:03.363 22:52:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:03.363 22:52:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:03.363 22:52:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:03.363 22:52:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:03.363 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:03.363 22:52:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:26:03.363 22:52:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:03.363 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:03.363 22:52:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.363 22:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.364 22:52:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:03.364 22:52:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:03.364 22:52:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:03.364 22:52:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:03.364 22:52:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:03.364 22:52:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.364 22:52:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:03.364 22:52:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.364 22:52:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:03.364 Found net devices under 0000:31:00.0: cvl_0_0 00:26:03.364 22:52:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.364 22:52:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:03.364 22:52:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.364 22:52:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:03.364 22:52:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.364 22:52:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:03.364 Found net devices under 0000:31:00.1: cvl_0_1 00:26:03.364 22:52:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.364 22:52:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:03.364 22:52:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:03.364 22:52:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:03.364 22:52:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:03.364 22:52:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:03.364 22:52:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.364 22:52:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.364 22:52:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.364 22:52:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:03.364 22:52:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.364 22:52:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.364 22:52:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:03.364 22:52:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.364 22:52:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.364 22:52:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:03.364 22:52:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:03.364 22:52:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.364 22:52:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.364 22:52:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.364 22:52:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.364 22:52:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:03.364 22:52:47 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.364 22:52:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.364 22:52:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.364 22:52:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:03.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:26:03.364 00:26:03.364 --- 10.0.0.2 ping statistics --- 00:26:03.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.364 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:26:03.364 22:52:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:03.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:26:03.364 00:26:03.364 --- 10.0.0.1 ping statistics --- 00:26:03.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.364 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:26:03.364 22:52:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.364 22:52:48 -- nvmf/common.sh@410 -- # return 0 00:26:03.364 22:52:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:03.364 22:52:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.364 22:52:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:03.364 22:52:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:03.364 22:52:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.364 22:52:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:03.364 22:52:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:03.364 22:52:48 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:03.364 22:52:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:03.364 22:52:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:03.364 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:26:03.364 22:52:48 -- nvmf/common.sh@469 -- # nvmfpid=1233005 00:26:03.625 22:52:48 -- nvmf/common.sh@470 -- # waitforlisten 1233005 00:26:03.625 22:52:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:03.625 22:52:48 -- common/autotest_common.sh@819 -- # '[' -z 1233005 ']' 00:26:03.625 22:52:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.625 22:52:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:03.625 22:52:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.625 22:52:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:03.625 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:26:03.625 [2024-04-15 22:52:48.221126] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:26:03.625 [2024-04-15 22:52:48.221192] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.625 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.625 [2024-04-15 22:52:48.299327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.625 [2024-04-15 22:52:48.373229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:03.625 [2024-04-15 22:52:48.373361] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.625 [2024-04-15 22:52:48.373371] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.625 [2024-04-15 22:52:48.373380] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.625 [2024-04-15 22:52:48.373507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.625 [2024-04-15 22:52:48.373584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.625 [2024-04-15 22:52:48.373742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.625 [2024-04-15 22:52:48.373742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:04.196 22:52:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:04.196 22:52:48 -- common/autotest_common.sh@852 -- # return 0 00:26:04.196 22:52:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:04.196 22:52:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:04.196 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:26:04.457 22:52:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.457 22:52:49 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:04.457 22:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.457 22:52:49 -- common/autotest_common.sh@10 -- # set +x 00:26:04.457 [2024-04-15 22:52:49.043705] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.457 22:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.457 22:52:49 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:04.457 22:52:49 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:04.457 22:52:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:04.457 22:52:49 -- common/autotest_common.sh@10 -- # set +x 00:26:04.457 22:52:49 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- 
target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:04.457 22:52:49 -- target/shutdown.sh@28 -- # cat 00:26:04.457 22:52:49 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:04.457 22:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.457 22:52:49 -- common/autotest_common.sh@10 -- # set +x 00:26:04.457 Malloc1 00:26:04.457 [2024-04-15 22:52:49.147251] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.457 Malloc2 00:26:04.457 Malloc3 00:26:04.457 Malloc4 00:26:04.719 Malloc5 00:26:04.719 Malloc6 00:26:04.719 Malloc7 00:26:04.719 Malloc8 00:26:04.719 Malloc9 00:26:04.719 Malloc10 00:26:04.719 22:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.719 22:52:49 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:04.719 22:52:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:04.719 22:52:49 -- common/autotest_common.sh@10 -- # set +x 00:26:04.981 22:52:49 -- target/shutdown.sh@78 -- # perfpid=1233206 00:26:04.981 22:52:49 -- target/shutdown.sh@79 -- # waitforlisten 1233206 /var/tmp/bdevperf.sock 00:26:04.981 22:52:49 -- common/autotest_common.sh@819 -- # '[' -z 1233206 ']' 00:26:04.981 22:52:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:04.981 22:52:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:04.981 22:52:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:04.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
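The rpc_cmd/cat loop traced above assembles rpcs.txt, which, when replayed against the running target, creates one Malloc-backed TCP subsystem per index; the Malloc1 through Malloc10 lines and the "Listening on 10.0.0.2 port 4420" notice are the result of that batch. Roughly, the generated batch amounts to the rpc.py calls below. This is only an illustrative sketch built from values visible in this trace (64 MiB Malloc bdevs, 512-byte blocks, serial SPDKISFASTANDAWESOME, listener 10.0.0.2:4420); exact RPC method names and flags depend on the SPDK release, so treat it as a reading aid rather than the script itself.

RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                    # as at target/shutdown.sh@20 above

for i in $(seq 1 10); do
    $RPC bdev_malloc_create -b "Malloc$i" 64 512                # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done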
00:26:04.981 22:52:49 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:04.981 22:52:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:04.981 22:52:49 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:04.981 22:52:49 -- common/autotest_common.sh@10 -- # set +x 00:26:04.981 22:52:49 -- nvmf/common.sh@520 -- # config=() 00:26:04.981 22:52:49 -- nvmf/common.sh@520 -- # local subsystem config 00:26:04.981 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.981 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.981 { 00:26:04.981 "params": { 00:26:04.981 "name": "Nvme$subsystem", 00:26:04.981 "trtype": "$TEST_TRANSPORT", 00:26:04.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.981 "adrfam": "ipv4", 00:26:04.981 "trsvcid": "$NVMF_PORT", 00:26:04.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.981 "hdgst": ${hdgst:-false}, 00:26:04.981 "ddgst": ${ddgst:-false} 00:26:04.981 }, 00:26:04.981 "method": "bdev_nvme_attach_controller" 00:26:04.981 } 00:26:04.981 EOF 00:26:04.981 )") 00:26:04.981 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.981 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.981 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.981 { 00:26:04.981 "params": { 00:26:04.981 "name": "Nvme$subsystem", 00:26:04.981 "trtype": "$TEST_TRANSPORT", 00:26:04.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.981 "adrfam": "ipv4", 00:26:04.981 "trsvcid": "$NVMF_PORT", 00:26:04.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.981 "hdgst": ${hdgst:-false}, 00:26:04.981 "ddgst": ${ddgst:-false} 00:26:04.981 }, 00:26:04.981 "method": "bdev_nvme_attach_controller" 00:26:04.981 } 00:26:04.981 EOF 00:26:04.981 )") 00:26:04.981 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.981 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.981 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.981 { 00:26:04.981 "params": { 00:26:04.981 "name": "Nvme$subsystem", 00:26:04.981 "trtype": "$TEST_TRANSPORT", 00:26:04.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.981 "adrfam": "ipv4", 00:26:04.981 "trsvcid": "$NVMF_PORT", 00:26:04.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.981 "hdgst": ${hdgst:-false}, 00:26:04.981 "ddgst": ${ddgst:-false} 00:26:04.981 }, 00:26:04.981 "method": "bdev_nvme_attach_controller" 00:26:04.981 } 00:26:04.982 EOF 00:26:04.982 )") 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.982 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.982 { 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme$subsystem", 00:26:04.982 "trtype": "$TEST_TRANSPORT", 00:26:04.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "$NVMF_PORT", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.982 "hdgst": ${hdgst:-false}, 00:26:04.982 "ddgst": ${ddgst:-false} 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 } 00:26:04.982 EOF 00:26:04.982 )") 00:26:04.982 22:52:49 -- 
nvmf/common.sh@542 -- # cat 00:26:04.982 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.982 { 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme$subsystem", 00:26:04.982 "trtype": "$TEST_TRANSPORT", 00:26:04.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "$NVMF_PORT", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.982 "hdgst": ${hdgst:-false}, 00:26:04.982 "ddgst": ${ddgst:-false} 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 } 00:26:04.982 EOF 00:26:04.982 )") 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.982 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.982 { 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme$subsystem", 00:26:04.982 "trtype": "$TEST_TRANSPORT", 00:26:04.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "$NVMF_PORT", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.982 "hdgst": ${hdgst:-false}, 00:26:04.982 "ddgst": ${ddgst:-false} 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 } 00:26:04.982 EOF 00:26:04.982 )") 00:26:04.982 [2024-04-15 22:52:49.596158] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:26:04.982 [2024-04-15 22:52:49.596212] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.982 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.982 { 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme$subsystem", 00:26:04.982 "trtype": "$TEST_TRANSPORT", 00:26:04.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "$NVMF_PORT", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.982 "hdgst": ${hdgst:-false}, 00:26:04.982 "ddgst": ${ddgst:-false} 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 } 00:26:04.982 EOF 00:26:04.982 )") 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.982 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.982 { 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme$subsystem", 00:26:04.982 "trtype": "$TEST_TRANSPORT", 00:26:04.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "$NVMF_PORT", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.982 "hdgst": ${hdgst:-false}, 00:26:04.982 "ddgst": ${ddgst:-false} 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 } 00:26:04.982 EOF 00:26:04.982 )") 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.982 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.982 22:52:49 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.982 { 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme$subsystem", 00:26:04.982 "trtype": "$TEST_TRANSPORT", 00:26:04.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "$NVMF_PORT", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.982 "hdgst": ${hdgst:-false}, 00:26:04.982 "ddgst": ${ddgst:-false} 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 } 00:26:04.982 EOF 00:26:04.982 )") 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.982 22:52:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.982 { 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme$subsystem", 00:26:04.982 "trtype": "$TEST_TRANSPORT", 00:26:04.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "$NVMF_PORT", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.982 "hdgst": ${hdgst:-false}, 00:26:04.982 "ddgst": ${ddgst:-false} 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 } 00:26:04.982 EOF 00:26:04.982 )") 00:26:04.982 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.982 22:52:49 -- nvmf/common.sh@542 -- # cat 00:26:04.982 22:52:49 -- nvmf/common.sh@544 -- # jq . 00:26:04.982 22:52:49 -- nvmf/common.sh@545 -- # IFS=, 00:26:04.982 22:52:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme1", 00:26:04.982 "trtype": "tcp", 00:26:04.982 "traddr": "10.0.0.2", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "4420", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:04.982 "hdgst": false, 00:26:04.982 "ddgst": false 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 },{ 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme2", 00:26:04.982 "trtype": "tcp", 00:26:04.982 "traddr": "10.0.0.2", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "4420", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:04.982 "hdgst": false, 00:26:04.982 "ddgst": false 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 },{ 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme3", 00:26:04.982 "trtype": "tcp", 00:26:04.982 "traddr": "10.0.0.2", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "4420", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:04.982 "hdgst": false, 00:26:04.982 "ddgst": false 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 },{ 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme4", 00:26:04.982 "trtype": "tcp", 00:26:04.982 "traddr": "10.0.0.2", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "4420", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:04.982 "hdgst": false, 00:26:04.982 "ddgst": false 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 },{ 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme5", 00:26:04.982 "trtype": "tcp", 00:26:04.982 "traddr": "10.0.0.2", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 
"trsvcid": "4420", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:04.982 "hdgst": false, 00:26:04.982 "ddgst": false 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 },{ 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme6", 00:26:04.982 "trtype": "tcp", 00:26:04.982 "traddr": "10.0.0.2", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "4420", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:04.982 "hdgst": false, 00:26:04.982 "ddgst": false 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 },{ 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme7", 00:26:04.982 "trtype": "tcp", 00:26:04.982 "traddr": "10.0.0.2", 00:26:04.982 "adrfam": "ipv4", 00:26:04.982 "trsvcid": "4420", 00:26:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:04.982 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:04.982 "hdgst": false, 00:26:04.982 "ddgst": false 00:26:04.982 }, 00:26:04.982 "method": "bdev_nvme_attach_controller" 00:26:04.982 },{ 00:26:04.982 "params": { 00:26:04.982 "name": "Nvme8", 00:26:04.982 "trtype": "tcp", 00:26:04.983 "traddr": "10.0.0.2", 00:26:04.983 "adrfam": "ipv4", 00:26:04.983 "trsvcid": "4420", 00:26:04.983 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:04.983 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:04.983 "hdgst": false, 00:26:04.983 "ddgst": false 00:26:04.983 }, 00:26:04.983 "method": "bdev_nvme_attach_controller" 00:26:04.983 },{ 00:26:04.983 "params": { 00:26:04.983 "name": "Nvme9", 00:26:04.983 "trtype": "tcp", 00:26:04.983 "traddr": "10.0.0.2", 00:26:04.983 "adrfam": "ipv4", 00:26:04.983 "trsvcid": "4420", 00:26:04.983 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:04.983 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:04.983 "hdgst": false, 00:26:04.983 "ddgst": false 00:26:04.983 }, 00:26:04.983 "method": "bdev_nvme_attach_controller" 00:26:04.983 },{ 00:26:04.983 "params": { 00:26:04.983 "name": "Nvme10", 00:26:04.983 "trtype": "tcp", 00:26:04.983 "traddr": "10.0.0.2", 00:26:04.983 "adrfam": "ipv4", 00:26:04.983 "trsvcid": "4420", 00:26:04.983 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:04.983 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:04.983 "hdgst": false, 00:26:04.983 "ddgst": false 00:26:04.983 }, 00:26:04.983 "method": "bdev_nvme_attach_controller" 00:26:04.983 }' 00:26:04.983 [2024-04-15 22:52:49.663421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.983 [2024-04-15 22:52:49.727491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.952 22:52:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:06.952 22:52:51 -- common/autotest_common.sh@852 -- # return 0 00:26:06.952 22:52:51 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:06.952 22:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.952 22:52:51 -- common/autotest_common.sh@10 -- # set +x 00:26:06.952 22:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.952 22:52:51 -- target/shutdown.sh@83 -- # kill -9 1233206 00:26:06.952 22:52:51 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:06.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1233206 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:06.952 22:52:51 -- target/shutdown.sh@87 -- # sleep 1 
00:26:07.894 22:52:52 -- target/shutdown.sh@88 -- # kill -0 1233005 00:26:07.894 22:52:52 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:07.894 22:52:52 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:07.894 22:52:52 -- nvmf/common.sh@520 -- # config=() 00:26:07.894 22:52:52 -- nvmf/common.sh@520 -- # local subsystem config 00:26:07.894 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.894 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.894 { 00:26:07.894 "params": { 00:26:07.894 "name": "Nvme$subsystem", 00:26:07.894 "trtype": "$TEST_TRANSPORT", 00:26:07.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.894 "adrfam": "ipv4", 00:26:07.894 "trsvcid": "$NVMF_PORT", 00:26:07.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.894 "hdgst": ${hdgst:-false}, 00:26:07.894 "ddgst": ${ddgst:-false} 00:26:07.894 }, 00:26:07.894 "method": "bdev_nvme_attach_controller" 00:26:07.894 } 00:26:07.894 EOF 00:26:07.894 )") 00:26:07.894 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:07.894 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.894 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.894 { 00:26:07.894 "params": { 00:26:07.894 "name": "Nvme$subsystem", 00:26:07.894 "trtype": "$TEST_TRANSPORT", 00:26:07.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.894 "adrfam": "ipv4", 00:26:07.894 "trsvcid": "$NVMF_PORT", 00:26:07.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.894 "hdgst": ${hdgst:-false}, 00:26:07.894 "ddgst": ${ddgst:-false} 00:26:07.894 }, 00:26:07.894 "method": "bdev_nvme_attach_controller" 00:26:07.894 } 00:26:07.894 EOF 00:26:07.894 )") 00:26:07.894 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:07.894 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.894 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.894 { 00:26:07.894 "params": { 00:26:07.894 "name": "Nvme$subsystem", 00:26:07.894 "trtype": "$TEST_TRANSPORT", 00:26:07.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.894 "adrfam": "ipv4", 00:26:07.894 "trsvcid": "$NVMF_PORT", 00:26:07.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.894 "hdgst": ${hdgst:-false}, 00:26:07.894 "ddgst": ${ddgst:-false} 00:26:07.894 }, 00:26:07.894 "method": "bdev_nvme_attach_controller" 00:26:07.894 } 00:26:07.894 EOF 00:26:07.894 )") 00:26:08.155 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:08.155 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:08.155 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:08.155 { 00:26:08.155 "params": { 00:26:08.155 "name": "Nvme$subsystem", 00:26:08.155 "trtype": "$TEST_TRANSPORT", 00:26:08.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.155 "adrfam": "ipv4", 00:26:08.155 "trsvcid": "$NVMF_PORT", 00:26:08.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.155 "hdgst": ${hdgst:-false}, 00:26:08.155 "ddgst": ${ddgst:-false} 00:26:08.155 }, 00:26:08.155 "method": "bdev_nvme_attach_controller" 00:26:08.155 } 00:26:08.155 EOF 00:26:08.155 )") 00:26:08.155 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:08.155 22:52:52 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:08.155 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:08.155 { 00:26:08.155 "params": { 00:26:08.155 "name": "Nvme$subsystem", 00:26:08.155 "trtype": "$TEST_TRANSPORT", 00:26:08.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.155 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "$NVMF_PORT", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.156 "hdgst": ${hdgst:-false}, 00:26:08.156 "ddgst": ${ddgst:-false} 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 } 00:26:08.156 EOF 00:26:08.156 )") 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:08.156 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:08.156 { 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme$subsystem", 00:26:08.156 "trtype": "$TEST_TRANSPORT", 00:26:08.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "$NVMF_PORT", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.156 "hdgst": ${hdgst:-false}, 00:26:08.156 "ddgst": ${ddgst:-false} 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 } 00:26:08.156 EOF 00:26:08.156 )") 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:08.156 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:08.156 { 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme$subsystem", 00:26:08.156 "trtype": "$TEST_TRANSPORT", 00:26:08.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "$NVMF_PORT", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.156 "hdgst": ${hdgst:-false}, 00:26:08.156 "ddgst": ${ddgst:-false} 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 } 00:26:08.156 EOF 00:26:08.156 )") 00:26:08.156 [2024-04-15 22:52:52.733378] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:26:08.156 [2024-04-15 22:52:52.733471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233858 ] 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:08.156 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:08.156 { 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme$subsystem", 00:26:08.156 "trtype": "$TEST_TRANSPORT", 00:26:08.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "$NVMF_PORT", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.156 "hdgst": ${hdgst:-false}, 00:26:08.156 "ddgst": ${ddgst:-false} 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 } 00:26:08.156 EOF 00:26:08.156 )") 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:08.156 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:08.156 { 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme$subsystem", 00:26:08.156 "trtype": "$TEST_TRANSPORT", 00:26:08.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "$NVMF_PORT", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.156 "hdgst": ${hdgst:-false}, 00:26:08.156 "ddgst": ${ddgst:-false} 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 } 00:26:08.156 EOF 00:26:08.156 )") 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:08.156 22:52:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:08.156 { 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme$subsystem", 00:26:08.156 "trtype": "$TEST_TRANSPORT", 00:26:08.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "$NVMF_PORT", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.156 "hdgst": ${hdgst:-false}, 00:26:08.156 "ddgst": ${ddgst:-false} 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 } 00:26:08.156 EOF 00:26:08.156 )") 00:26:08.156 22:52:52 -- nvmf/common.sh@542 -- # cat 00:26:08.156 22:52:52 -- nvmf/common.sh@544 -- # jq . 
00:26:08.156 22:52:52 -- nvmf/common.sh@545 -- # IFS=, 00:26:08.156 22:52:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme1", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme2", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme3", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme4", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme5", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme6", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme7", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme8", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": 
"bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme9", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 },{ 00:26:08.156 "params": { 00:26:08.156 "name": "Nvme10", 00:26:08.156 "trtype": "tcp", 00:26:08.156 "traddr": "10.0.0.2", 00:26:08.156 "adrfam": "ipv4", 00:26:08.156 "trsvcid": "4420", 00:26:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:08.156 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:08.156 "hdgst": false, 00:26:08.156 "ddgst": false 00:26:08.156 }, 00:26:08.156 "method": "bdev_nvme_attach_controller" 00:26:08.156 }' 00:26:08.156 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.156 [2024-04-15 22:52:52.806661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.156 [2024-04-15 22:52:52.868644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.544 Running I/O for 1 seconds... 00:26:10.502 00:26:10.502 Latency(us) 00:26:10.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.502 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme1n1 : 1.08 402.84 25.18 0.00 0.00 154893.10 25449.81 148548.27 00:26:10.502 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme2n1 : 1.10 440.96 27.56 0.00 0.00 142166.65 14636.37 138936.32 00:26:10.502 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme3n1 : 1.10 440.34 27.52 0.00 0.00 141343.55 13926.40 133693.44 00:26:10.502 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme4n1 : 1.07 404.30 25.27 0.00 0.00 150668.12 30365.01 125829.12 00:26:10.502 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme5n1 : 1.07 405.53 25.35 0.00 0.00 149436.43 27962.03 115343.36 00:26:10.502 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme6n1 : 1.10 438.16 27.38 0.00 0.00 138755.07 14745.60 115343.36 00:26:10.502 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme7n1 : 1.09 442.27 27.64 0.00 0.00 136514.45 14964.05 116217.17 00:26:10.502 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme8n1 : 1.10 440.68 27.54 0.00 0.00 135797.97 5679.79 113595.73 00:26:10.502 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.502 Nvme9n1 : 1.10 441.40 27.59 0.00 0.00 134832.25 5324.80 124081.49 00:26:10.502 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:10.502 Verification LBA range: start 0x0 length 0x400 00:26:10.503 
Nvme10n1 : 1.11 436.99 27.31 0.00 0.00 135197.85 14090.24 122333.87 00:26:10.503 =================================================================================================================== 00:26:10.503 Total : 4293.47 268.34 0.00 0.00 141647.57 5324.80 148548.27 00:26:10.763 22:52:55 -- target/shutdown.sh@93 -- # stoptarget 00:26:10.763 22:52:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:10.763 22:52:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:10.763 22:52:55 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:10.763 22:52:55 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:10.763 22:52:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:10.763 22:52:55 -- nvmf/common.sh@116 -- # sync 00:26:10.763 22:52:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:10.763 22:52:55 -- nvmf/common.sh@119 -- # set +e 00:26:10.763 22:52:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:10.763 22:52:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:10.763 rmmod nvme_tcp 00:26:10.763 rmmod nvme_fabrics 00:26:10.763 rmmod nvme_keyring 00:26:10.763 22:52:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:10.763 22:52:55 -- nvmf/common.sh@123 -- # set -e 00:26:10.763 22:52:55 -- nvmf/common.sh@124 -- # return 0 00:26:10.763 22:52:55 -- nvmf/common.sh@477 -- # '[' -n 1233005 ']' 00:26:10.763 22:52:55 -- nvmf/common.sh@478 -- # killprocess 1233005 00:26:10.763 22:52:55 -- common/autotest_common.sh@926 -- # '[' -z 1233005 ']' 00:26:10.763 22:52:55 -- common/autotest_common.sh@930 -- # kill -0 1233005 00:26:10.763 22:52:55 -- common/autotest_common.sh@931 -- # uname 00:26:10.763 22:52:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:10.763 22:52:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1233005 00:26:10.763 22:52:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:10.763 22:52:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:10.763 22:52:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1233005' 00:26:10.763 killing process with pid 1233005 00:26:10.763 22:52:55 -- common/autotest_common.sh@945 -- # kill 1233005 00:26:10.763 22:52:55 -- common/autotest_common.sh@950 -- # wait 1233005 00:26:11.023 22:52:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:11.023 22:52:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:11.023 22:52:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:11.023 22:52:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.023 22:52:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:11.023 22:52:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.023 22:52:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.023 22:52:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.568 22:52:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:13.568 00:26:13.568 real 0m18.012s 00:26:13.568 user 0m36.653s 00:26:13.568 sys 0m7.356s 00:26:13.568 22:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.568 22:52:57 -- common/autotest_common.sh@10 -- # set +x 00:26:13.568 ************************************ 00:26:13.568 END TEST nvmf_shutdown_tc1 00:26:13.568 ************************************ 00:26:13.568 22:52:57 -- target/shutdown.sh@147 -- # run_test 
nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:13.568 22:52:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:13.568 22:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:13.568 22:52:57 -- common/autotest_common.sh@10 -- # set +x 00:26:13.568 ************************************ 00:26:13.568 START TEST nvmf_shutdown_tc2 00:26:13.568 ************************************ 00:26:13.568 22:52:57 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:26:13.568 22:52:57 -- target/shutdown.sh@98 -- # starttarget 00:26:13.568 22:52:57 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:13.568 22:52:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:13.568 22:52:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.568 22:52:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:13.568 22:52:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:13.568 22:52:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:13.568 22:52:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.568 22:52:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.568 22:52:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.568 22:52:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:13.568 22:52:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:13.568 22:52:57 -- common/autotest_common.sh@10 -- # set +x 00:26:13.568 22:52:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:13.568 22:52:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:13.568 22:52:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:13.568 22:52:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:13.568 22:52:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:13.568 22:52:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:13.568 22:52:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:13.568 22:52:57 -- nvmf/common.sh@294 -- # net_devs=() 00:26:13.568 22:52:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:13.568 22:52:57 -- nvmf/common.sh@295 -- # e810=() 00:26:13.568 22:52:57 -- nvmf/common.sh@295 -- # local -ga e810 00:26:13.568 22:52:57 -- nvmf/common.sh@296 -- # x722=() 00:26:13.568 22:52:57 -- nvmf/common.sh@296 -- # local -ga x722 00:26:13.568 22:52:57 -- nvmf/common.sh@297 -- # mlx=() 00:26:13.568 22:52:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:13.568 22:52:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.568 22:52:57 -- nvmf/common.sh@319 -- # 
pci_devs+=("${e810[@]}") 00:26:13.568 22:52:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:13.568 22:52:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:13.568 22:52:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:13.568 22:52:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:13.568 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:13.568 22:52:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:13.568 22:52:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:13.568 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:13.568 22:52:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:13.568 22:52:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:13.568 22:52:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.568 22:52:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:13.568 22:52:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.568 22:52:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:13.568 Found net devices under 0000:31:00.0: cvl_0_0 00:26:13.568 22:52:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.568 22:52:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:13.568 22:52:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.568 22:52:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:13.568 22:52:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.568 22:52:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:13.568 Found net devices under 0000:31:00.1: cvl_0_1 00:26:13.568 22:52:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.568 22:52:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:13.568 22:52:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:13.568 22:52:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:13.568 22:52:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:13.568 22:52:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.568 22:52:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.568 22:52:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.568 22:52:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:13.568 22:52:57 -- nvmf/common.sh@235 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.568 22:52:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.568 22:52:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:13.568 22:52:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.568 22:52:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.568 22:52:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:13.568 22:52:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:13.568 22:52:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.568 22:52:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.568 22:52:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.568 22:52:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.568 22:52:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:13.568 22:52:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.568 22:52:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.568 22:52:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.568 22:52:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:13.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:26:13.568 00:26:13.568 --- 10.0.0.2 ping statistics --- 00:26:13.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.568 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:26:13.568 22:52:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:26:13.568 00:26:13.568 --- 10.0.0.1 ping statistics --- 00:26:13.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.568 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:26:13.568 22:52:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.568 22:52:58 -- nvmf/common.sh@410 -- # return 0 00:26:13.568 22:52:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:13.568 22:52:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.568 22:52:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:13.568 22:52:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:13.568 22:52:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.568 22:52:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:13.568 22:52:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:13.569 22:52:58 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:13.569 22:52:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:13.569 22:52:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:13.569 22:52:58 -- common/autotest_common.sh@10 -- # set +x 00:26:13.569 22:52:58 -- nvmf/common.sh@469 -- # nvmfpid=1235035 00:26:13.569 22:52:58 -- nvmf/common.sh@470 -- # waitforlisten 1235035 00:26:13.569 22:52:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:13.569 22:52:58 -- common/autotest_common.sh@819 -- # '[' -z 1235035 ']' 00:26:13.569 22:52:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.569 22:52:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:13.569 22:52:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.569 22:52:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:13.569 22:52:58 -- common/autotest_common.sh@10 -- # set +x 00:26:13.830 [2024-04-15 22:52:58.402526] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:26:13.830 [2024-04-15 22:52:58.402617] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.830 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.830 [2024-04-15 22:52:58.487878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.830 [2024-04-15 22:52:58.560249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:13.830 [2024-04-15 22:52:58.560386] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.830 [2024-04-15 22:52:58.560395] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.830 [2024-04-15 22:52:58.560403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
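The nvmf_tcp_init sequence traced above reduces to the following standalone sketch (every command is copied from the trace; cvl_0_0 and cvl_0_1 are the interface names detected on this particular host). The target-side port is moved into its own network namespace, each side gets a 10.0.0.0/24 address, TCP port 4420 is opened, connectivity is checked with a ping in each direction, and nvme-tcp is loaded on the initiator side:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target (0.663 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
modprobe nvme-tcp

The nvmf_tgt started right after this runs under 'ip netns exec cvl_0_0_ns_spdk' with core mask 0x1E, which is why the reactor notices below report cores 1-4.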
00:26:13.830 [2024-04-15 22:52:58.560551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.830 [2024-04-15 22:52:58.560704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.830 [2024-04-15 22:52:58.560959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:13.830 [2024-04-15 22:52:58.560960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.400 22:52:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:14.400 22:52:59 -- common/autotest_common.sh@852 -- # return 0 00:26:14.400 22:52:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:14.400 22:52:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:14.400 22:52:59 -- common/autotest_common.sh@10 -- # set +x 00:26:14.661 22:52:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.661 22:52:59 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.661 22:52:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.661 22:52:59 -- common/autotest_common.sh@10 -- # set +x 00:26:14.661 [2024-04-15 22:52:59.220658] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.661 22:52:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.661 22:52:59 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:14.661 22:52:59 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:14.661 22:52:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:14.661 22:52:59 -- common/autotest_common.sh@10 -- # set +x 00:26:14.661 22:52:59 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.661 22:52:59 -- target/shutdown.sh@28 -- # cat 00:26:14.661 22:52:59 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:14.661 22:52:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.661 22:52:59 -- common/autotest_common.sh@10 -- # set +x 00:26:14.661 Malloc1 00:26:14.661 [2024-04-15 22:52:59.316961] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.661 Malloc2 
00:26:14.661 Malloc3 00:26:14.661 Malloc4 00:26:14.661 Malloc5 00:26:14.921 Malloc6 00:26:14.921 Malloc7 00:26:14.921 Malloc8 00:26:14.921 Malloc9 00:26:14.921 Malloc10 00:26:14.921 22:52:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.921 22:52:59 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:14.921 22:52:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:14.921 22:52:59 -- common/autotest_common.sh@10 -- # set +x 00:26:14.921 22:52:59 -- target/shutdown.sh@102 -- # perfpid=1235374 00:26:14.921 22:52:59 -- target/shutdown.sh@103 -- # waitforlisten 1235374 /var/tmp/bdevperf.sock 00:26:14.921 22:52:59 -- common/autotest_common.sh@819 -- # '[' -z 1235374 ']' 00:26:14.921 22:52:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.921 22:52:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:14.921 22:52:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:14.921 22:52:59 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:14.921 22:52:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:14.921 22:52:59 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:14.921 22:52:59 -- common/autotest_common.sh@10 -- # set +x 00:26:14.921 22:52:59 -- nvmf/common.sh@520 -- # config=() 00:26:14.921 22:52:59 -- nvmf/common.sh@520 -- # local subsystem config 00:26:14.921 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:14.921 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:14.921 { 00:26:14.921 "params": { 00:26:14.921 "name": "Nvme$subsystem", 00:26:14.921 "trtype": "$TEST_TRANSPORT", 00:26:14.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.921 "adrfam": "ipv4", 00:26:14.921 "trsvcid": "$NVMF_PORT", 00:26:14.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.921 "hdgst": ${hdgst:-false}, 00:26:14.921 "ddgst": ${ddgst:-false} 00:26:14.921 }, 00:26:14.921 "method": "bdev_nvme_attach_controller" 00:26:14.921 } 00:26:14.921 EOF 00:26:14.921 )") 00:26:14.921 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:14.921 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:14.921 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:14.921 { 00:26:14.921 "params": { 00:26:14.921 "name": "Nvme$subsystem", 00:26:14.921 "trtype": "$TEST_TRANSPORT", 00:26:14.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.921 "adrfam": "ipv4", 00:26:14.921 "trsvcid": "$NVMF_PORT", 00:26:14.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.921 "hdgst": ${hdgst:-false}, 00:26:14.921 "ddgst": ${ddgst:-false} 00:26:14.921 }, 00:26:14.922 "method": "bdev_nvme_attach_controller" 00:26:14.922 } 00:26:14.922 EOF 00:26:14.922 )") 00:26:14.922 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.182 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.182 { 00:26:15.182 "params": { 00:26:15.182 "name": "Nvme$subsystem", 00:26:15.182 "trtype": "$TEST_TRANSPORT", 00:26:15.182 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:15.182 "adrfam": "ipv4", 00:26:15.182 "trsvcid": "$NVMF_PORT", 00:26:15.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.182 "hdgst": ${hdgst:-false}, 00:26:15.182 "ddgst": ${ddgst:-false} 00:26:15.182 }, 00:26:15.182 "method": "bdev_nvme_attach_controller" 00:26:15.182 } 00:26:15.182 EOF 00:26:15.182 )") 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.182 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.182 { 00:26:15.182 "params": { 00:26:15.182 "name": "Nvme$subsystem", 00:26:15.182 "trtype": "$TEST_TRANSPORT", 00:26:15.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.182 "adrfam": "ipv4", 00:26:15.182 "trsvcid": "$NVMF_PORT", 00:26:15.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.182 "hdgst": ${hdgst:-false}, 00:26:15.182 "ddgst": ${ddgst:-false} 00:26:15.182 }, 00:26:15.182 "method": "bdev_nvme_attach_controller" 00:26:15.182 } 00:26:15.182 EOF 00:26:15.182 )") 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.182 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.182 { 00:26:15.182 "params": { 00:26:15.182 "name": "Nvme$subsystem", 00:26:15.182 "trtype": "$TEST_TRANSPORT", 00:26:15.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.182 "adrfam": "ipv4", 00:26:15.182 "trsvcid": "$NVMF_PORT", 00:26:15.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.182 "hdgst": ${hdgst:-false}, 00:26:15.182 "ddgst": ${ddgst:-false} 00:26:15.182 }, 00:26:15.182 "method": "bdev_nvme_attach_controller" 00:26:15.182 } 00:26:15.182 EOF 00:26:15.182 )") 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.182 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.182 { 00:26:15.182 "params": { 00:26:15.182 "name": "Nvme$subsystem", 00:26:15.182 "trtype": "$TEST_TRANSPORT", 00:26:15.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.182 "adrfam": "ipv4", 00:26:15.182 "trsvcid": "$NVMF_PORT", 00:26:15.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.182 "hdgst": ${hdgst:-false}, 00:26:15.182 "ddgst": ${ddgst:-false} 00:26:15.182 }, 00:26:15.182 "method": "bdev_nvme_attach_controller" 00:26:15.182 } 00:26:15.182 EOF 00:26:15.182 )") 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.182 [2024-04-15 22:52:59.763659] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:26:15.182 [2024-04-15 22:52:59.763736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235374 ] 00:26:15.182 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.182 { 00:26:15.182 "params": { 00:26:15.182 "name": "Nvme$subsystem", 00:26:15.182 "trtype": "$TEST_TRANSPORT", 00:26:15.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.182 "adrfam": "ipv4", 00:26:15.182 "trsvcid": "$NVMF_PORT", 00:26:15.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.182 "hdgst": ${hdgst:-false}, 00:26:15.182 "ddgst": ${ddgst:-false} 00:26:15.182 }, 00:26:15.182 "method": "bdev_nvme_attach_controller" 00:26:15.182 } 00:26:15.182 EOF 00:26:15.182 )") 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.182 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.182 { 00:26:15.182 "params": { 00:26:15.182 "name": "Nvme$subsystem", 00:26:15.182 "trtype": "$TEST_TRANSPORT", 00:26:15.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.182 "adrfam": "ipv4", 00:26:15.182 "trsvcid": "$NVMF_PORT", 00:26:15.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.182 "hdgst": ${hdgst:-false}, 00:26:15.182 "ddgst": ${ddgst:-false} 00:26:15.182 }, 00:26:15.182 "method": "bdev_nvme_attach_controller" 00:26:15.182 } 00:26:15.182 EOF 00:26:15.182 )") 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.182 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.182 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.182 { 00:26:15.182 "params": { 00:26:15.182 "name": "Nvme$subsystem", 00:26:15.182 "trtype": "$TEST_TRANSPORT", 00:26:15.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.182 "adrfam": "ipv4", 00:26:15.182 "trsvcid": "$NVMF_PORT", 00:26:15.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.183 "hdgst": ${hdgst:-false}, 00:26:15.183 "ddgst": ${ddgst:-false} 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 } 00:26:15.183 EOF 00:26:15.183 )") 00:26:15.183 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.183 22:52:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.183 22:52:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.183 { 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme$subsystem", 00:26:15.183 "trtype": "$TEST_TRANSPORT", 00:26:15.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "$NVMF_PORT", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.183 "hdgst": ${hdgst:-false}, 00:26:15.183 "ddgst": ${ddgst:-false} 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 } 00:26:15.183 EOF 00:26:15.183 )") 00:26:15.183 22:52:59 -- nvmf/common.sh@542 -- # cat 00:26:15.183 22:52:59 -- nvmf/common.sh@544 -- # jq . 
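Pieced back together, the gen_nvmf_target_json fragments being assembled above form a small generator: one bdev_nvme_attach_controller entry per subsystem number, joined with commas and handed to bdevperf via --json /dev/fd/63. A condensed sketch reconstructed from the trace (the real helper in nvmf/common.sh also runs the result through jq, pulls the transport and address values from the environment prepared earlier, and the outer JSON wrapper bdevperf ultimately parses is not echoed in this log):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # comma-joined entries, printed a few lines further down
}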
00:26:15.183 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.183 22:52:59 -- nvmf/common.sh@545 -- # IFS=, 00:26:15.183 22:52:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme1", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme2", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme3", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme4", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme5", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme6", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme7", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme8", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 
00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme9", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 },{ 00:26:15.183 "params": { 00:26:15.183 "name": "Nvme10", 00:26:15.183 "trtype": "tcp", 00:26:15.183 "traddr": "10.0.0.2", 00:26:15.183 "adrfam": "ipv4", 00:26:15.183 "trsvcid": "4420", 00:26:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:15.183 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:15.183 "hdgst": false, 00:26:15.183 "ddgst": false 00:26:15.183 }, 00:26:15.183 "method": "bdev_nvme_attach_controller" 00:26:15.183 }' 00:26:15.183 [2024-04-15 22:52:59.833642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.183 [2024-04-15 22:52:59.896369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.566 Running I/O for 10 seconds... 00:26:17.136 22:53:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:17.136 22:53:01 -- common/autotest_common.sh@852 -- # return 0 00:26:17.136 22:53:01 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:17.136 22:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.136 22:53:01 -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 22:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.136 22:53:01 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:17.136 22:53:01 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:17.136 22:53:01 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:17.137 22:53:01 -- target/shutdown.sh@57 -- # local ret=1 00:26:17.137 22:53:01 -- target/shutdown.sh@58 -- # local i 00:26:17.137 22:53:01 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:17.137 22:53:01 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:17.137 22:53:01 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:17.137 22:53:01 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:17.137 22:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.137 22:53:01 -- common/autotest_common.sh@10 -- # set +x 00:26:17.137 22:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.137 22:53:01 -- target/shutdown.sh@60 -- # read_io_count=211 00:26:17.137 22:53:01 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:26:17.137 22:53:01 -- target/shutdown.sh@64 -- # ret=0 00:26:17.137 22:53:01 -- target/shutdown.sh@65 -- # break 00:26:17.137 22:53:01 -- target/shutdown.sh@69 -- # return 0 00:26:17.137 22:53:01 -- target/shutdown.sh@109 -- # killprocess 1235374 00:26:17.137 22:53:01 -- common/autotest_common.sh@926 -- # '[' -z 1235374 ']' 00:26:17.137 22:53:01 -- common/autotest_common.sh@930 -- # kill -0 1235374 00:26:17.137 22:53:01 -- common/autotest_common.sh@931 -- # uname 00:26:17.137 22:53:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.137 22:53:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1235374 00:26:17.397 22:53:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:17.397 22:53:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:17.397 
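The waitforio call traced just above is a simple poll against bdevperf's RPC socket: keep reading the iostat of the named bdev until its read count clears the floor of 100 (211 on the first try here). A rough reconstruction; the pacing between retries is an assumption, the rest is read off the trace:

waitforio() {
    local sock=$1 bdev=$2 ret=1 i count
    [ -z "$sock" ] && return 1          # the real helper errors out on missing arguments
    [ -z "$bdev" ] && return 1
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1                          # assumed delay; only the successful first pass is visible above
    done
    return $ret
}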
22:53:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1235374' 00:26:17.397 killing process with pid 1235374 00:26:17.397 22:53:01 -- common/autotest_common.sh@945 -- # kill 1235374 00:26:17.397 22:53:01 -- common/autotest_common.sh@950 -- # wait 1235374 00:26:17.397 Received shutdown signal, test time was about 0.813713 seconds 00:26:17.397 00:26:17.397 Latency(us) 00:26:17.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.397 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme1n1 : 0.77 409.14 25.57 0.00 0.00 152805.68 19333.12 162529.28 00:26:17.397 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme2n1 : 0.76 421.06 26.32 0.00 0.00 147040.62 8683.52 145926.83 00:26:17.397 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme3n1 : 0.79 451.34 28.21 0.00 0.00 135881.74 12779.52 115343.36 00:26:17.397 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme4n1 : 0.81 387.54 24.22 0.00 0.00 148548.96 20753.07 124081.49 00:26:17.397 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme5n1 : 0.79 452.16 28.26 0.00 0.00 132356.03 12888.75 113595.73 00:26:17.397 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme6n1 : 0.76 412.11 25.76 0.00 0.00 142107.71 20097.71 117964.80 00:26:17.397 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme7n1 : 0.79 450.41 28.15 0.00 0.00 129556.29 12670.29 107042.13 00:26:17.397 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme8n1 : 0.78 403.96 25.25 0.00 0.00 142002.27 9338.88 122333.87 00:26:17.397 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme9n1 : 0.78 409.28 25.58 0.00 0.00 137904.51 3768.32 125829.12 00:26:17.397 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:17.397 Verification LBA range: start 0x0 length 0x400 00:26:17.397 Nvme10n1 : 0.78 404.58 25.29 0.00 0.00 137528.51 21189.97 109663.57 00:26:17.397 =================================================================================================================== 00:26:17.397 Total : 4201.58 262.60 0.00 0.00 140291.28 3768.32 162529.28 00:26:17.397 22:53:02 -- target/shutdown.sh@112 -- # sleep 1 00:26:18.781 22:53:03 -- target/shutdown.sh@113 -- # kill -0 1235035 00:26:18.781 22:53:03 -- target/shutdown.sh@115 -- # stoptarget 00:26:18.781 22:53:03 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:18.781 22:53:03 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:18.781 22:53:03 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:18.781 22:53:03 -- target/shutdown.sh@45 -- 
# nvmftestfini 00:26:18.781 22:53:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:18.781 22:53:03 -- nvmf/common.sh@116 -- # sync 00:26:18.781 22:53:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:18.781 22:53:03 -- nvmf/common.sh@119 -- # set +e 00:26:18.781 22:53:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:18.781 22:53:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:18.781 rmmod nvme_tcp 00:26:18.781 rmmod nvme_fabrics 00:26:18.781 rmmod nvme_keyring 00:26:18.781 22:53:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:18.781 22:53:03 -- nvmf/common.sh@123 -- # set -e 00:26:18.781 22:53:03 -- nvmf/common.sh@124 -- # return 0 00:26:18.781 22:53:03 -- nvmf/common.sh@477 -- # '[' -n 1235035 ']' 00:26:18.781 22:53:03 -- nvmf/common.sh@478 -- # killprocess 1235035 00:26:18.781 22:53:03 -- common/autotest_common.sh@926 -- # '[' -z 1235035 ']' 00:26:18.781 22:53:03 -- common/autotest_common.sh@930 -- # kill -0 1235035 00:26:18.781 22:53:03 -- common/autotest_common.sh@931 -- # uname 00:26:18.781 22:53:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:18.781 22:53:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1235035 00:26:18.781 22:53:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:18.781 22:53:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:18.781 22:53:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1235035' 00:26:18.781 killing process with pid 1235035 00:26:18.781 22:53:03 -- common/autotest_common.sh@945 -- # kill 1235035 00:26:18.781 22:53:03 -- common/autotest_common.sh@950 -- # wait 1235035 00:26:19.043 22:53:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:19.043 22:53:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:19.043 22:53:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:19.043 22:53:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.043 22:53:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:19.043 22:53:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.043 22:53:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.043 22:53:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.957 22:53:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:20.957 00:26:20.957 real 0m7.738s 00:26:20.957 user 0m22.787s 00:26:20.957 sys 0m1.314s 00:26:20.957 22:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.957 22:53:05 -- common/autotest_common.sh@10 -- # set +x 00:26:20.957 ************************************ 00:26:20.957 END TEST nvmf_shutdown_tc2 00:26:20.957 ************************************ 00:26:20.957 22:53:05 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:20.957 22:53:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:20.957 22:53:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:20.957 22:53:05 -- common/autotest_common.sh@10 -- # set +x 00:26:20.957 ************************************ 00:26:20.957 START TEST nvmf_shutdown_tc3 00:26:20.957 ************************************ 00:26:20.957 22:53:05 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:26:20.958 22:53:05 -- target/shutdown.sh@120 -- # starttarget 00:26:20.958 22:53:05 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:20.958 22:53:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:20.958 22:53:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:26:20.958 22:53:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:20.958 22:53:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:20.958 22:53:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:20.958 22:53:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.958 22:53:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.958 22:53:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.958 22:53:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:20.958 22:53:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:20.958 22:53:05 -- common/autotest_common.sh@10 -- # set +x 00:26:20.958 22:53:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:20.958 22:53:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:20.958 22:53:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:20.958 22:53:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:20.958 22:53:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:20.958 22:53:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:20.958 22:53:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:20.958 22:53:05 -- nvmf/common.sh@294 -- # net_devs=() 00:26:20.958 22:53:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:20.958 22:53:05 -- nvmf/common.sh@295 -- # e810=() 00:26:20.958 22:53:05 -- nvmf/common.sh@295 -- # local -ga e810 00:26:20.958 22:53:05 -- nvmf/common.sh@296 -- # x722=() 00:26:20.958 22:53:05 -- nvmf/common.sh@296 -- # local -ga x722 00:26:20.958 22:53:05 -- nvmf/common.sh@297 -- # mlx=() 00:26:20.958 22:53:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:20.958 22:53:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.958 22:53:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:20.958 22:53:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:20.958 22:53:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:20.958 22:53:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:20.958 22:53:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:20.958 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:20.958 22:53:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@349 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:20.958 22:53:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:20.958 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:20.958 22:53:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:20.958 22:53:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:20.958 22:53:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.958 22:53:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:20.958 22:53:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.958 22:53:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:20.958 Found net devices under 0000:31:00.0: cvl_0_0 00:26:20.958 22:53:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.958 22:53:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:20.958 22:53:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.958 22:53:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:20.958 22:53:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.958 22:53:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:20.958 Found net devices under 0000:31:00.1: cvl_0_1 00:26:20.958 22:53:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.958 22:53:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:20.958 22:53:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:20.958 22:53:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:20.958 22:53:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:20.958 22:53:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.958 22:53:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.958 22:53:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.958 22:53:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:20.958 22:53:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.958 22:53:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.958 22:53:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:20.958 22:53:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.958 22:53:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.958 22:53:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:20.958 22:53:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:20.958 22:53:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.958 22:53:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.219 22:53:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.219 
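The 'Found net devices under 0000:31:00.x' lines are produced by a short discovery loop: the harness keeps the PCI functions matching the e810 device IDs (0x8086:0x159b) and reads each one's kernel interface name out of sysfs. Read off the trace:

pci_devs=("${e810[@]}")                               # 0000:31:00.0 and 0000:31:00.1 on this host
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done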
22:53:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.219 22:53:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:21.219 22:53:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.481 22:53:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.481 22:53:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.481 22:53:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:21.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:26:21.481 00:26:21.481 --- 10.0.0.2 ping statistics --- 00:26:21.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.481 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:26:21.481 22:53:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:21.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:26:21.481 00:26:21.481 --- 10.0.0.1 ping statistics --- 00:26:21.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.481 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:26:21.481 22:53:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.481 22:53:06 -- nvmf/common.sh@410 -- # return 0 00:26:21.481 22:53:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:21.481 22:53:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.481 22:53:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:21.481 22:53:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:21.481 22:53:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.481 22:53:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:21.481 22:53:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:21.481 22:53:06 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:21.481 22:53:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:21.481 22:53:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:21.481 22:53:06 -- common/autotest_common.sh@10 -- # set +x 00:26:21.481 22:53:06 -- nvmf/common.sh@469 -- # nvmfpid=1236842 00:26:21.481 22:53:06 -- nvmf/common.sh@470 -- # waitforlisten 1236842 00:26:21.481 22:53:06 -- common/autotest_common.sh@819 -- # '[' -z 1236842 ']' 00:26:21.481 22:53:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:21.481 22:53:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.481 22:53:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:21.481 22:53:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.481 22:53:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:21.481 22:53:06 -- common/autotest_common.sh@10 -- # set +x 00:26:21.481 [2024-04-15 22:53:06.188354] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:26:21.481 [2024-04-15 22:53:06.188418] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.481 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.481 [2024-04-15 22:53:06.265933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.743 [2024-04-15 22:53:06.337977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:21.743 [2024-04-15 22:53:06.338112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.743 [2024-04-15 22:53:06.338126] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.743 [2024-04-15 22:53:06.338134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.743 [2024-04-15 22:53:06.338252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.743 [2024-04-15 22:53:06.338408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.743 [2024-04-15 22:53:06.338581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.743 [2024-04-15 22:53:06.338582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:22.314 22:53:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:22.314 22:53:06 -- common/autotest_common.sh@852 -- # return 0 00:26:22.314 22:53:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:22.314 22:53:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:22.314 22:53:06 -- common/autotest_common.sh@10 -- # set +x 00:26:22.314 22:53:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.314 22:53:07 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:22.314 22:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.314 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.314 [2024-04-15 22:53:07.009731] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.314 22:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.314 22:53:07 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:22.314 22:53:07 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:22.314 22:53:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:22.314 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.314 22:53:07 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- 
target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:22.314 22:53:07 -- target/shutdown.sh@28 -- # cat 00:26:22.314 22:53:07 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:22.314 22:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.314 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.314 Malloc1 00:26:22.314 [2024-04-15 22:53:07.110043] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.578 Malloc2 00:26:22.578 Malloc3 00:26:22.579 Malloc4 00:26:22.579 Malloc5 00:26:22.579 Malloc6 00:26:22.579 Malloc7 00:26:22.579 Malloc8 00:26:22.841 Malloc9 00:26:22.841 Malloc10 00:26:22.841 22:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.841 22:53:07 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:22.841 22:53:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:22.841 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.841 22:53:07 -- target/shutdown.sh@124 -- # perfpid=1237138 00:26:22.841 22:53:07 -- target/shutdown.sh@125 -- # waitforlisten 1237138 /var/tmp/bdevperf.sock 00:26:22.841 22:53:07 -- common/autotest_common.sh@819 -- # '[' -z 1237138 ']' 00:26:22.841 22:53:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:22.841 22:53:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:22.841 22:53:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:22.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
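The ten Malloc bdevs listed above come out of the create_subsystems step: each pass of the '# cat' loop appends one block of RPCs for subsystem $i to rpcs.txt, and the file is then replayed through rpc_cmd in a single batch. The file itself is never echoed into this log, so the block below is only an illustration of its likely shape for i=1 (the bdev size, block size and serial number are invented for the example; the Malloc names, the cnode NQN scheme and the 10.0.0.2:4420 TCP listener are the ones confirmed elsewhere in the trace):

# hypothetical rpcs.txt fragment for i=1, repeated through cnode10
bdev_malloc_create 64 512 -b Malloc1
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420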
00:26:22.841 22:53:07 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:22.841 22:53:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:22.841 22:53:07 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:22.841 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.841 22:53:07 -- nvmf/common.sh@520 -- # config=() 00:26:22.841 22:53:07 -- nvmf/common.sh@520 -- # local subsystem config 00:26:22.841 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.841 { 00:26:22.841 "params": { 00:26:22.841 "name": "Nvme$subsystem", 00:26:22.841 "trtype": "$TEST_TRANSPORT", 00:26:22.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.841 "adrfam": "ipv4", 00:26:22.841 "trsvcid": "$NVMF_PORT", 00:26:22.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.841 "hdgst": ${hdgst:-false}, 00:26:22.841 "ddgst": ${ddgst:-false} 00:26:22.841 }, 00:26:22.841 "method": "bdev_nvme_attach_controller" 00:26:22.841 } 00:26:22.841 EOF 00:26:22.841 )") 00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.841 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.841 { 00:26:22.841 "params": { 00:26:22.841 "name": "Nvme$subsystem", 00:26:22.841 "trtype": "$TEST_TRANSPORT", 00:26:22.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.841 "adrfam": "ipv4", 00:26:22.841 "trsvcid": "$NVMF_PORT", 00:26:22.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.841 "hdgst": ${hdgst:-false}, 00:26:22.841 "ddgst": ${ddgst:-false} 00:26:22.841 }, 00:26:22.841 "method": "bdev_nvme_attach_controller" 00:26:22.841 } 00:26:22.841 EOF 00:26:22.841 )") 00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.841 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.841 { 00:26:22.841 "params": { 00:26:22.841 "name": "Nvme$subsystem", 00:26:22.841 "trtype": "$TEST_TRANSPORT", 00:26:22.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.841 "adrfam": "ipv4", 00:26:22.841 "trsvcid": "$NVMF_PORT", 00:26:22.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.841 "hdgst": ${hdgst:-false}, 00:26:22.841 "ddgst": ${ddgst:-false} 00:26:22.841 }, 00:26:22.841 "method": "bdev_nvme_attach_controller" 00:26:22.841 } 00:26:22.841 EOF 00:26:22.841 )") 00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.841 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.841 { 00:26:22.841 "params": { 00:26:22.841 "name": "Nvme$subsystem", 00:26:22.841 "trtype": "$TEST_TRANSPORT", 00:26:22.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.841 "adrfam": "ipv4", 00:26:22.841 "trsvcid": "$NVMF_PORT", 00:26:22.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.841 "hdgst": ${hdgst:-false}, 00:26:22.841 "ddgst": ${ddgst:-false} 00:26:22.841 }, 00:26:22.841 "method": "bdev_nvme_attach_controller" 00:26:22.841 } 00:26:22.841 EOF 00:26:22.841 )") 
00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.841 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.841 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.841 { 00:26:22.841 "params": { 00:26:22.841 "name": "Nvme$subsystem", 00:26:22.841 "trtype": "$TEST_TRANSPORT", 00:26:22.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.841 "adrfam": "ipv4", 00:26:22.841 "trsvcid": "$NVMF_PORT", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.842 "hdgst": ${hdgst:-false}, 00:26:22.842 "ddgst": ${ddgst:-false} 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 } 00:26:22.842 EOF 00:26:22.842 )") 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.842 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.842 { 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme$subsystem", 00:26:22.842 "trtype": "$TEST_TRANSPORT", 00:26:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "$NVMF_PORT", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.842 "hdgst": ${hdgst:-false}, 00:26:22.842 "ddgst": ${ddgst:-false} 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 } 00:26:22.842 EOF 00:26:22.842 )") 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.842 [2024-04-15 22:53:07.561897] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:26:22.842 [2024-04-15 22:53:07.561953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237138 ] 00:26:22.842 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.842 { 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme$subsystem", 00:26:22.842 "trtype": "$TEST_TRANSPORT", 00:26:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "$NVMF_PORT", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.842 "hdgst": ${hdgst:-false}, 00:26:22.842 "ddgst": ${ddgst:-false} 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 } 00:26:22.842 EOF 00:26:22.842 )") 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.842 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.842 { 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme$subsystem", 00:26:22.842 "trtype": "$TEST_TRANSPORT", 00:26:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "$NVMF_PORT", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.842 "hdgst": ${hdgst:-false}, 00:26:22.842 "ddgst": ${ddgst:-false} 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 } 00:26:22.842 EOF 00:26:22.842 )") 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.842 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.842 { 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme$subsystem", 00:26:22.842 "trtype": "$TEST_TRANSPORT", 00:26:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "$NVMF_PORT", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.842 "hdgst": ${hdgst:-false}, 00:26:22.842 "ddgst": ${ddgst:-false} 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 } 00:26:22.842 EOF 00:26:22.842 )") 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.842 22:53:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.842 { 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme$subsystem", 00:26:22.842 "trtype": "$TEST_TRANSPORT", 00:26:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "$NVMF_PORT", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.842 "hdgst": ${hdgst:-false}, 00:26:22.842 "ddgst": ${ddgst:-false} 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 } 00:26:22.842 EOF 00:26:22.842 )") 00:26:22.842 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.842 22:53:07 -- nvmf/common.sh@542 -- # cat 00:26:22.842 22:53:07 -- nvmf/common.sh@544 -- # jq . 00:26:22.842 22:53:07 -- nvmf/common.sh@545 -- # IFS=, 00:26:22.842 22:53:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme1", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme2", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme3", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme4", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme5", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 
"adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme6", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme7", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme8", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme9", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 },{ 00:26:22.842 "params": { 00:26:22.842 "name": "Nvme10", 00:26:22.842 "trtype": "tcp", 00:26:22.842 "traddr": "10.0.0.2", 00:26:22.842 "adrfam": "ipv4", 00:26:22.842 "trsvcid": "4420", 00:26:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:22.842 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:22.842 "hdgst": false, 00:26:22.842 "ddgst": false 00:26:22.842 }, 00:26:22.842 "method": "bdev_nvme_attach_controller" 00:26:22.842 }' 00:26:22.842 [2024-04-15 22:53:07.628503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.104 [2024-04-15 22:53:07.691385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.491 Running I/O for 10 seconds... 
00:26:25.077 22:53:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:25.077 22:53:09 -- common/autotest_common.sh@852 -- # return 0 00:26:25.077 22:53:09 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:25.077 22:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.077 22:53:09 -- common/autotest_common.sh@10 -- # set +x 00:26:25.077 22:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.077 22:53:09 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:25.077 22:53:09 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:25.077 22:53:09 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:25.077 22:53:09 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:25.077 22:53:09 -- target/shutdown.sh@57 -- # local ret=1 00:26:25.077 22:53:09 -- target/shutdown.sh@58 -- # local i 00:26:25.077 22:53:09 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:25.077 22:53:09 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:25.077 22:53:09 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:25.077 22:53:09 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:25.077 22:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.077 22:53:09 -- common/autotest_common.sh@10 -- # set +x 00:26:25.077 22:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.077 22:53:09 -- target/shutdown.sh@60 -- # read_io_count=167 00:26:25.077 22:53:09 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:26:25.077 22:53:09 -- target/shutdown.sh@64 -- # ret=0 00:26:25.077 22:53:09 -- target/shutdown.sh@65 -- # break 00:26:25.077 22:53:09 -- target/shutdown.sh@69 -- # return 0 00:26:25.077 22:53:09 -- target/shutdown.sh@134 -- # killprocess 1236842 00:26:25.077 22:53:09 -- common/autotest_common.sh@926 -- # '[' -z 1236842 ']' 00:26:25.077 22:53:09 -- common/autotest_common.sh@930 -- # kill -0 1236842 00:26:25.077 22:53:09 -- common/autotest_common.sh@931 -- # uname 00:26:25.077 22:53:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:25.077 22:53:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1236842 00:26:25.077 22:53:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:25.077 22:53:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:25.077 22:53:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1236842' 00:26:25.077 killing process with pid 1236842 00:26:25.077 22:53:09 -- common/autotest_common.sh@945 -- # kill 1236842 00:26:25.077 22:53:09 -- common/autotest_common.sh@950 -- # wait 1236842 00:26:25.077 [2024-04-15 22:53:09.751722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6900 is same with the state(5) to be set 00:26:25.077 [2024-04-15 22:53:09.751767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6900 is same with the state(5) to be set 00:26:25.077 [2024-04-15 22:53:09.751773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6900 is same with the state(5) to be set 00:26:25.077 [2024-04-15 22:53:09.751778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6900 is same with the state(5) to be set 00:26:25.077 [2024-04-15 22:53:09.751783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14a6900 is same with the state(5) to be set
00:26:25.077 [2024-04-15 22:53:09.751787 through 22:53:09.752054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6900 is same with the state(5) to be set (identical message repeated throughout this interval)
00:26:25.078 [2024-04-15 22:53:09.753057 through 22:53:09.753366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9160 is same with the state(5) to be set (identical message repeated throughout this interval)
00:26:25.079 [2024-04-15 22:53:09.754193 through 22:53:09.754496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6d90 is same with the state(5) to be set (identical message repeated throughout this interval)
00:26:25.079 [2024-04-15 22:53:09.755344 through 22:53:09.755365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7240 is same with the state(5) to be set (message seen twice)
00:26:25.080 [2024-04-15 22:53:09.755904 through 22:53:09.756340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a76d0 is same with the state(5) to be set (identical message repeated throughout this interval)
00:26:25.081 [2024-04-15 22:53:09.757066 through 22:53:09.757373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7b80 is same with the state(5) to be set (identical message repeated throughout this interval)
00:26:25.082 [2024-04-15 22:53:09.758049 through 22:53:09.758314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set (identical message repeated throughout this interval and beyond)
state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.758319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.758323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.758328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.758333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.758338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.758342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.758347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.758353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8030 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 
22:53:09.759445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.759495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.767501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.082 [2024-04-15 22:53:09.767535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.082 [2024-04-15 22:53:09.767552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.082 [2024-04-15 22:53:09.767559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.082 [2024-04-15 22:53:09.767568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.082 [2024-04-15 22:53:09.767575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.082 [2024-04-15 22:53:09.767587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.082 [2024-04-15 22:53:09.767595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.082 [2024-04-15 22:53:09.767603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4f30 is same with the state(5) to be set 00:26:25.082 [2024-04-15 22:53:09.767632] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.082 [2024-04-15 22:53:09.767641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.082 [2024-04-15 22:53:09.767649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.082 [2024-04-15 22:53:09.767656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a84520 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.767715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4a730 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.767799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b42b70 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.767885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f260 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.767967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.767991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.767999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5360 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf84d0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.083 [2024-04-15 22:53:09.768190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.768197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b424d0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.768955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84c0 is same with the state(5) to be set 00:26:25.083 [2024-04-15 22:53:09.769203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-04-15 22:53:09.769222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 [2024-04-15 22:53:09.769239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-04-15 22:53:09.769247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.083 
[2024-04-15 22:53:09.769256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.083 [2024-04-15 22:53:09.769263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769426] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with [2024-04-15 22:53:09.769654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36864 len:1the state(5) to be set 00:26:25.084 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with [2024-04-15 22:53:09.769675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36992 len:12the state(5) to be set 00:26:25.084 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with [2024-04-15 22:53:09.769735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37376 len:1the state(5) to be set 00:26:25.084 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-15 22:53:09.769767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with [2024-04-15 22:53:09.769797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37760 len:1the state(5) to be set 00:26:25.084 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.084 [2024-04-15 22:53:09.769807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.084 [2024-04-15 22:53:09.769809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:25.084 [2024-04-15 22:53:09.769815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30720 len:12[2024-04-15 22:53:09.769892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
c[2024-04-15 22:53:09.769903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.769988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.769994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.769996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:31 nsid:1 lba:32896 len:12[2024-04-15 22:53:09.770077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-15 22:53:09.770088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8820 is same with the state(5) to be set 00:26:25.085 [2024-04-15 22:53:09.770145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.085 [2024-04-15 22:53:09.770180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.085 [2024-04-15 22:53:09.770189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcf240 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770413] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bcf240 was disconnected and freed. reset controller. 00:26:25.086 [2024-04-15 22:53:09.770723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770803]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770880] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.086 [2024-04-15 22:53:09.770899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.086 [2024-04-15 22:53:09.770907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.086 [2024-04-15 22:53:09.770912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.770924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.770930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.770940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.770946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.770958]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.770973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.770979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.770991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.770995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771034]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8cb0 is same with the state(5) to be set 00:26:25.087 [2024-04-15 22:53:09.771096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771124]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.087 [2024-04-15 22:53:09.771359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-04-15 22:53:09.771366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.771382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.771399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.771416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.771432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.771450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.771468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.771484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.771501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.771510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.780855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.780901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.780911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.780922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.780930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.780940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.780948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.780957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.780965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.780974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.780982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.780992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.780999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-04-15 22:53:09.781286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.781353] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ca2ec0 was disconnected and freed. reset controller. 
00:26:25.088 [2024-04-15 22:53:09.782829] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:25.088 [2024-04-15 22:53:09.782863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b42b70 (9): Bad file descriptor 00:26:25.088 [2024-04-15 22:53:09.782912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.088 [2024-04-15 22:53:09.782924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.782934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.088 [2024-04-15 22:53:09.782941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.782949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.088 [2024-04-15 22:53:09.782957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.782965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.088 [2024-04-15 22:53:09.782973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.782980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5aa0 is same with the state(5) to be set 00:26:25.088 [2024-04-15 22:53:09.783007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.088 [2024-04-15 22:53:09.783016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.783025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.088 [2024-04-15 22:53:09.783032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-04-15 22:53:09.783041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.088 [2024-04-15 22:53:09.783048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.783056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.089 [2024-04-15 22:53:09.783064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.783072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b19a60 is same with the state(5) to be set 00:26:25.089 [2024-04-15 22:53:09.783088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce4f30 
(9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.783106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a84520 (9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.783125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4a730 (9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.783139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1f260 (9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.783154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5360 (9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.783170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf84d0 (9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.783188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b424d0 (9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.785491] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:25.089 [2024-04-15 22:53:09.785609] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:25.089 [2024-04-15 22:53:09.785974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.785990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.089 [2024-04-15 22:53:09.786099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 
22:53:09.786276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-04-15 22:53:09.786381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.089 [2024-04-15 22:53:09.786390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd07e0 is same with the state(5) to be set 00:26:25.089 [2024-04-15 22:53:09.786436] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bd07e0 was disconnected and freed. reset controller. 
00:26:25.089 [2024-04-15 22:53:09.786469] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:25.089 [2024-04-15 22:53:09.786629] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:25.089 [2024-04-15 22:53:09.787057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-04-15 22:53:09.787452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-04-15 22:53:09.787463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b42b70 with addr=10.0.0.2, port=4420 00:26:25.089 [2024-04-15 22:53:09.787471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b42b70 is same with the state(5) to be set 00:26:25.089 [2024-04-15 22:53:09.787954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-04-15 22:53:09.788214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-04-15 22:53:09.788227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4a730 with addr=10.0.0.2, port=4420 00:26:25.089 [2024-04-15 22:53:09.788237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4a730 is same with the state(5) to be set 00:26:25.089 [2024-04-15 22:53:09.788289] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:25.089 [2024-04-15 22:53:09.789709] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:25.089 [2024-04-15 22:53:09.789755] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:25.089 [2024-04-15 22:53:09.789818] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:25.089 [2024-04-15 22:53:09.789850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b42b70 (9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.789864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4a730 (9): Bad file descriptor 00:26:25.089 [2024-04-15 22:53:09.789977] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:25.089 [2024-04-15 22:53:09.790301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-04-15 22:53:09.790577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-04-15 22:53:09.790598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5360 with addr=10.0.0.2, port=4420 00:26:25.089 [2024-04-15 22:53:09.790607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5360 is same with the state(5) to be set 00:26:25.089 [2024-04-15 22:53:09.790616] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:25.089 [2024-04-15 22:53:09.790623] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:25.089 [2024-04-15 22:53:09.790632] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:26:25.089 [2024-04-15 22:53:09.790648] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:25.090 [2024-04-15 22:53:09.790655] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:25.090 [2024-04-15 22:53:09.790662] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:25.090 [2024-04-15 22:53:09.790989] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.090 [2024-04-15 22:53:09.791004] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.090 [2024-04-15 22:53:09.791013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5360 (9): Bad file descriptor 00:26:25.090 [2024-04-15 22:53:09.791060] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:25.090 [2024-04-15 22:53:09.791067] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:25.090 [2024-04-15 22:53:09.791074] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:25.090 [2024-04-15 22:53:09.791122] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.090 [2024-04-15 22:53:09.792872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5aa0 (9): Bad file descriptor 00:26:25.090 [2024-04-15 22:53:09.792893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b19a60 (9): Bad file descriptor 00:26:25.090 [2024-04-15 22:53:09.793017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 
[2024-04-15 22:53:09.793118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 
22:53:09.793302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.090 [2024-04-15 22:53:09.793550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-04-15 22:53:09.793558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.793992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.793999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.794157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.794167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcea00 is same with the state(5) to be set 00:26:25.091 [2024-04-15 22:53:09.795412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.795427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.795439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.795447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.795459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.795467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.795478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.795487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.795498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-04-15 22:53:09.795506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.091 [2024-04-15 22:53:09.795515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.795984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.795992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-04-15 22:53:09.796219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-04-15 22:53:09.796232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.796562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.796571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca1920 is same with the state(5) to be set 00:26:25.093 [2024-04-15 22:53:09.797797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797929] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.797989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.797999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:46 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-04-15 22:53:09.798165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-04-15 22:53:09.798175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31616 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.094 [2024-04-15 22:53:09.798819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.094 [2024-04-15 22:53:09.798865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-04-15 22:53:09.798873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.798884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.798892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.798901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.798909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.798919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.798926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.798936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.798944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.798952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd1d80 is same with the state(5) to be set 00:26:25.095 [2024-04-15 22:53:09.800197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 
22:53:09.800245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800429] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.095 [2024-04-15 22:53:09.800735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-04-15 22:53:09.800742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.800989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.800998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.801336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.801344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd3320 is same with the state(5) to be set 00:26:25.096 [2024-04-15 22:53:09.802591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.802606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.802619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.802631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.802644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.802653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.802665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.802675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.096 [2024-04-15 22:53:09.802685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-04-15 22:53:09.802693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.802988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.802995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-04-15 22:53:09.803388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-04-15 22:53:09.803397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.803744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.803754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd73a0 is same with the state(5) to be set 00:26:25.098 [2024-04-15 22:53:09.805527] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.098 [2024-04-15 22:53:09.805558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:25.098 [2024-04-15 22:53:09.805568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:25.098 [2024-04-15 22:53:09.805578] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:25.098 [2024-04-15 22:53:09.805658] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
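The long runs of ABORTED - SQ DELETION completions above are the I/Os still outstanding on each qpair being aborted as its submission queue is torn down during the controller reset, and the connect() failures that follow report errno = 111 (ECONNREFUSED): while a controller is resetting, the target's NVMe/TCP listener at 10.0.0.2:4420 briefly refuses new connections, so nvme_tcp_qpair_connect_sock fails and the host retries. A minimal standalone sketch (not SPDK code; only the address and port are taken from the log, everything else is assumed for illustration) that reproduces the same errno when no listener is accepting on that port:

/* sketch: plain TCP connect to the log's target address; with no listener
 * present this prints errno = 111 (ECONNREFUSED) on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected while the listener is down: errno = 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}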
00:26:25.098 [2024-04-15 22:53:09.805733] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:25.098 [2024-04-15 22:53:09.806033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-04-15 22:53:09.806352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-04-15 22:53:09.806363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1f260 with addr=10.0.0.2, port=4420 00:26:25.098 [2024-04-15 22:53:09.806370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f260 is same with the state(5) to be set 00:26:25.098 [2024-04-15 22:53:09.806754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-04-15 22:53:09.807109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-04-15 22:53:09.807120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a84520 with addr=10.0.0.2, port=4420 00:26:25.098 [2024-04-15 22:53:09.807127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a84520 is same with the state(5) to be set 00:26:25.098 [2024-04-15 22:53:09.807503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-04-15 22:53:09.807824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-04-15 22:53:09.807835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce4f30 with addr=10.0.0.2, port=4420 00:26:25.098 [2024-04-15 22:53:09.807842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4f30 is same with the state(5) to be set 00:26:25.098 [2024-04-15 22:53:09.808252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-04-15 22:53:09.808597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-04-15 22:53:09.808607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf84d0 with addr=10.0.0.2, port=4420 00:26:25.098 [2024-04-15 22:53:09.808615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf84d0 is same with the state(5) to be set 00:26:25.098 [2024-04-15 22:53:09.809713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-04-15 22:53:09.809904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-04-15 22:53:09.809912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.809922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.809930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.809940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.809948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.809958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.809966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.809976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.809984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.809994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.099 [2024-04-15 22:53:09.810494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-04-15 22:53:09.810521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-04-15 22:53:09.810529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 
[2024-04-15 22:53:09.810677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.810832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 
22:53:09.810851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.810859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd48c0 is same with the state(5) to be set 00:26:25.100 [2024-04-15 22:53:09.812097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-04-15 22:53:09.812459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-04-15 22:53:09.812467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.812987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.812994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.813012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.813030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.813048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.813065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.813082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.813099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.813117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.101 [2024-04-15 22:53:09.813134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.101 [2024-04-15 22:53:09.813143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.101 [2024-04-15 22:53:09.813151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.101 [2024-04-15 22:53:09.813160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.101 [2024-04-15 22:53:09.813168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.102 [2024-04-15 22:53:09.813177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.102 [2024-04-15 22:53:09.813185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.102 [2024-04-15 22:53:09.813195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.102 [2024-04-15 22:53:09.813202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.102 [2024-04-15 22:53:09.813212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.102 [2024-04-15 22:53:09.813220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.102 [2024-04-15 22:53:09.813229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5d90 is same with the state(5) to be set
00:26:25.102 [2024-04-15 22:53:09.814934] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:25.102 [2024-04-15 22:53:09.814955] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:25.102 [2024-04-15 22:53:09.814966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:26:25.102 [2024-04-15 22:53:09.814975] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:25.102 task offset: 34048 on job bdev=Nvme4n1 fails
00:26:25.102
00:26:25.102 Latency(us)
00:26:25.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.102 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme1n1 ended in about 0.63 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme1n1 : 0.63 330.88 20.68 101.81 0.00 146733.98 78206.29 142431.57
00:26:25.102 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme2n1 ended in about 0.63 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme2n1 : 0.63 329.63 20.60 101.42 0.00 145411.21 90439.68 117964.80
00:26:25.102 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme3n1 ended in about 0.62 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme3n1 : 0.62 406.29 25.39 103.60 0.00 121183.35 14417.92 130198.19
00:26:25.102 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme4n1 ended in about 0.62 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme4n1 : 0.62 401.06 25.07 103.92 0.00 120737.26 14308.69 129324.37
00:26:25.102 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme5n1 ended in about 0.62 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme5n1 : 0.62 399.74 24.98 36.92 0.00 137315.44 3686.40 113595.73
00:26:25.102 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme6n1 ended in about 0.63 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme6n1 : 0.63 328.39 20.52 101.04 0.00 138434.36 73837.23 117964.80
00:26:25.102 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme7n1 ended in about 0.64 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme7n1 : 0.64 327.16 20.45 100.66 0.00 137098.74 83449.17 108789.76
00:26:25.102 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme8n1 ended in about 0.65 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme8n1 : 0.65 322.34 20.15 99.18 0.00 137448.91 75584.85 112721.92
00:26:25.102 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme9n1 ended in about 0.65 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme9n1 : 0.65 321.17 20.07 98.82 0.00 136185.57 83012.27 113595.73
00:26:25.102 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:25.102 Job: Nvme10n1 ended in about 0.64 seconds with error
00:26:25.102 Verification LBA range: start 0x0 length 0x400
00:26:25.102 Nvme10n1 : 0.64 329.06 20.57 100.29 0.00 131200.96 9721.17 115343.36
00:26:25.102 ===================================================================================================================
00:26:25.102 Total : 3495.73 218.48 947.67 0.00 134756.77 3686.40 142431.57
00:26:25.102 [2024-04-15 22:53:09.840563] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:25.102 [2024-04-15 22:53:09.840618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:26:25.102 [2024-04-15 22:53:09.840964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.102 [2024-04-15 22:53:09.841083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.102 [2024-04-15 22:53:09.841093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b424d0 with addr=10.0.0.2, port=4420
00:26:25.102 [2024-04-15 22:53:09.841104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b424d0 is same with the state(5) to be set
00:26:25.102 [2024-04-15 22:53:09.841118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1f260 (9): Bad file descriptor
00:26:25.102 [2024-04-15 22:53:09.841129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a84520 (9): Bad file descriptor
00:26:25.102 [2024-04-15 22:53:09.841139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce4f30 (9): Bad file descriptor
00:26:25.102 [2024-04-15 
22:53:09.841149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf84d0 (9): Bad file descriptor 00:26:25.102 [2024-04-15 22:53:09.841692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.841928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.841939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4a730 with addr=10.0.0.2, port=4420 00:26:25.102 [2024-04-15 22:53:09.841947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4a730 is same with the state(5) to be set 00:26:25.102 [2024-04-15 22:53:09.842338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.842709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.842720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b42b70 with addr=10.0.0.2, port=4420 00:26:25.102 [2024-04-15 22:53:09.842727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b42b70 is same with the state(5) to be set 00:26:25.102 [2024-04-15 22:53:09.843095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.843360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.843371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5360 with addr=10.0.0.2, port=4420 00:26:25.102 [2024-04-15 22:53:09.843378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5360 is same with the state(5) to be set 00:26:25.102 [2024-04-15 22:53:09.843759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.844122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.844133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b19a60 with addr=10.0.0.2, port=4420 00:26:25.102 [2024-04-15 22:53:09.844140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b19a60 is same with the state(5) to be set 00:26:25.102 [2024-04-15 22:53:09.844520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.844890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.102 [2024-04-15 22:53:09.844900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5aa0 with addr=10.0.0.2, port=4420 00:26:25.102 [2024-04-15 22:53:09.844908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5aa0 is same with the state(5) to be set 00:26:25.102 [2024-04-15 22:53:09.844917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b424d0 (9): Bad file descriptor 00:26:25.102 [2024-04-15 22:53:09.844927] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.102 [2024-04-15 22:53:09.844937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.102 [2024-04-15 22:53:09.844946] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:26:25.102 [2024-04-15 22:53:09.844959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:25.102 [2024-04-15 22:53:09.844966] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:25.102 [2024-04-15 22:53:09.844973] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:25.102 [2024-04-15 22:53:09.844983] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:25.102 [2024-04-15 22:53:09.844989] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:25.102 [2024-04-15 22:53:09.844996] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:25.102 [2024-04-15 22:53:09.845007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:25.102 [2024-04-15 22:53:09.845013] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:25.102 [2024-04-15 22:53:09.845020] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:25.102 [2024-04-15 22:53:09.845042] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:25.102 [2024-04-15 22:53:09.845053] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:25.102 [2024-04-15 22:53:09.845062] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:25.102 [2024-04-15 22:53:09.845074] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:25.102 [2024-04-15 22:53:09.845084] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:25.102 [2024-04-15 22:53:09.845691] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.102 [2024-04-15 22:53:09.845704] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.102 [2024-04-15 22:53:09.845710] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.102 [2024-04-15 22:53:09.845717] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.103 [2024-04-15 22:53:09.845725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4a730 (9): Bad file descriptor 00:26:25.103 [2024-04-15 22:53:09.845734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b42b70 (9): Bad file descriptor 00:26:25.103 [2024-04-15 22:53:09.845743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5360 (9): Bad file descriptor 00:26:25.103 [2024-04-15 22:53:09.845753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b19a60 (9): Bad file descriptor 00:26:25.103 [2024-04-15 22:53:09.845762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5aa0 (9): Bad file descriptor 00:26:25.103 [2024-04-15 22:53:09.845770] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:25.103 [2024-04-15 22:53:09.845777] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:25.103 [2024-04-15 22:53:09.845784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:25.103 [2024-04-15 22:53:09.845837] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.103 [2024-04-15 22:53:09.845846] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:25.103 [2024-04-15 22:53:09.845853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:25.103 [2024-04-15 22:53:09.845863] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:25.103 [2024-04-15 22:53:09.845873] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:25.103 [2024-04-15 22:53:09.845880] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:25.103 [2024-04-15 22:53:09.845887] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:26:25.103 [2024-04-15 22:53:09.845897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:25.103 [2024-04-15 22:53:09.845904] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:25.103 [2024-04-15 22:53:09.845910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:25.103 [2024-04-15 22:53:09.845920] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:25.103 [2024-04-15 22:53:09.845927] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:25.103 [2024-04-15 22:53:09.845934] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:26:25.103 [2024-04-15 22:53:09.845943] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:25.103 [2024-04-15 22:53:09.845950] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:25.103 [2024-04-15 22:53:09.845957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:25.103 [2024-04-15 22:53:09.845993] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.103 [2024-04-15 22:53:09.846001] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.103 [2024-04-15 22:53:09.846007] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.103 [2024-04-15 22:53:09.846013] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.103 [2024-04-15 22:53:09.846020] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.375 22:53:10 -- target/shutdown.sh@135 -- # nvmfpid= 00:26:25.375 22:53:10 -- target/shutdown.sh@138 -- # sleep 1 00:26:26.318 22:53:11 -- target/shutdown.sh@141 -- # kill -9 1237138 00:26:26.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1237138) - No such process 00:26:26.318 22:53:11 -- target/shutdown.sh@141 -- # true 00:26:26.318 22:53:11 -- target/shutdown.sh@143 -- # stoptarget 00:26:26.318 22:53:11 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:26.319 22:53:11 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:26.319 22:53:11 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.319 22:53:11 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:26.319 22:53:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:26.319 22:53:11 -- nvmf/common.sh@116 -- # sync 00:26:26.319 22:53:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:26.319 22:53:11 -- nvmf/common.sh@119 -- # set +e 00:26:26.319 22:53:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:26.319 22:53:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:26.319 rmmod nvme_tcp 00:26:26.319 rmmod nvme_fabrics 00:26:26.319 rmmod nvme_keyring 00:26:26.319 22:53:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:26.580 22:53:11 -- nvmf/common.sh@123 -- # set -e 00:26:26.580 22:53:11 -- nvmf/common.sh@124 -- # return 0 00:26:26.580 22:53:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:26:26.580 22:53:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:26.580 22:53:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:26.580 22:53:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:26.580 22:53:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.580 22:53:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:26.580 22:53:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.580 22:53:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.580 22:53:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.496 22:53:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:28.496 00:26:28.496 real 0m7.489s 00:26:28.496 user 0m17.462s 00:26:28.496 sys 0m1.222s 00:26:28.496 22:53:13 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:26:28.496 22:53:13 -- common/autotest_common.sh@10 -- # set +x 00:26:28.496 ************************************ 00:26:28.496 END TEST nvmf_shutdown_tc3 00:26:28.496 ************************************ 00:26:28.496 22:53:13 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:26:28.496 00:26:28.496 real 0m33.508s 00:26:28.496 user 1m17.000s 00:26:28.496 sys 0m10.091s 00:26:28.496 22:53:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.496 22:53:13 -- common/autotest_common.sh@10 -- # set +x 00:26:28.496 ************************************ 00:26:28.496 END TEST nvmf_shutdown 00:26:28.496 ************************************ 00:26:28.496 22:53:13 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:26:28.496 22:53:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:28.496 22:53:13 -- common/autotest_common.sh@10 -- # set +x 00:26:28.758 22:53:13 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:26:28.758 22:53:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:28.758 22:53:13 -- common/autotest_common.sh@10 -- # set +x 00:26:28.758 22:53:13 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:26:28.758 22:53:13 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:28.758 22:53:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:28.758 22:53:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.758 22:53:13 -- common/autotest_common.sh@10 -- # set +x 00:26:28.758 ************************************ 00:26:28.758 START TEST nvmf_multicontroller 00:26:28.758 ************************************ 00:26:28.758 22:53:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:28.758 * Looking for test storage... 
00:26:28.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.758 22:53:13 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.758 22:53:13 -- nvmf/common.sh@7 -- # uname -s 00:26:28.758 22:53:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.758 22:53:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.758 22:53:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.758 22:53:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.758 22:53:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.758 22:53:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.758 22:53:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.758 22:53:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.758 22:53:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.758 22:53:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.758 22:53:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:28.758 22:53:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:28.758 22:53:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.758 22:53:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.758 22:53:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.758 22:53:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.758 22:53:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.758 22:53:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.758 22:53:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.758 22:53:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.758 22:53:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.758 22:53:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.759 22:53:13 -- paths/export.sh@5 -- # export PATH 00:26:28.759 22:53:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.759 22:53:13 -- nvmf/common.sh@46 -- # : 0 00:26:28.759 22:53:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:28.759 22:53:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:28.759 22:53:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:28.759 22:53:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.759 22:53:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.759 22:53:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:28.759 22:53:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:28.759 22:53:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:28.759 22:53:13 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:28.759 22:53:13 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:28.759 22:53:13 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:28.759 22:53:13 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:28.759 22:53:13 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:28.759 22:53:13 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:28.759 22:53:13 -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:28.759 22:53:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:28.759 22:53:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.759 22:53:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:28.759 22:53:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:28.759 22:53:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:28.759 22:53:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.759 22:53:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.759 22:53:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.759 22:53:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:28.759 22:53:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:28.759 22:53:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:28.759 22:53:13 -- common/autotest_common.sh@10 -- # set +x 00:26:36.908 22:53:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:36.908 22:53:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:36.908 22:53:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:36.908 22:53:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:36.908 
22:53:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:36.908 22:53:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:36.908 22:53:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:36.908 22:53:20 -- nvmf/common.sh@294 -- # net_devs=() 00:26:36.908 22:53:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:36.908 22:53:20 -- nvmf/common.sh@295 -- # e810=() 00:26:36.908 22:53:20 -- nvmf/common.sh@295 -- # local -ga e810 00:26:36.908 22:53:20 -- nvmf/common.sh@296 -- # x722=() 00:26:36.908 22:53:20 -- nvmf/common.sh@296 -- # local -ga x722 00:26:36.908 22:53:20 -- nvmf/common.sh@297 -- # mlx=() 00:26:36.909 22:53:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:36.909 22:53:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.909 22:53:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:36.909 22:53:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:36.909 22:53:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:36.909 22:53:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:36.909 22:53:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:36.909 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:36.909 22:53:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:36.909 22:53:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:36.909 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:36.909 22:53:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:36.909 22:53:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:26:36.909 22:53:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.909 22:53:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:36.909 22:53:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.909 22:53:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:36.909 Found net devices under 0000:31:00.0: cvl_0_0 00:26:36.909 22:53:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.909 22:53:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:36.909 22:53:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.909 22:53:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:36.909 22:53:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.909 22:53:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:36.909 Found net devices under 0000:31:00.1: cvl_0_1 00:26:36.909 22:53:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.909 22:53:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:36.909 22:53:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:36.909 22:53:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:36.909 22:53:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.909 22:53:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.909 22:53:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.909 22:53:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:36.909 22:53:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.909 22:53:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.909 22:53:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:36.909 22:53:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.909 22:53:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.909 22:53:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:36.909 22:53:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:36.909 22:53:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.909 22:53:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.909 22:53:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.909 22:53:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.909 22:53:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:36.909 22:53:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.909 22:53:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.909 22:53:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.909 22:53:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:36.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:36.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.738 ms 00:26:36.909 00:26:36.909 --- 10.0.0.2 ping statistics --- 00:26:36.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.909 rtt min/avg/max/mdev = 0.738/0.738/0.738/0.000 ms 00:26:36.909 22:53:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:26:36.909 00:26:36.909 --- 10.0.0.1 ping statistics --- 00:26:36.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.909 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:26:36.909 22:53:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.909 22:53:20 -- nvmf/common.sh@410 -- # return 0 00:26:36.909 22:53:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:36.909 22:53:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.909 22:53:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:36.909 22:53:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.909 22:53:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:36.909 22:53:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:36.909 22:53:20 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:36.909 22:53:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:36.909 22:53:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:36.909 22:53:20 -- common/autotest_common.sh@10 -- # set +x 00:26:36.909 22:53:20 -- nvmf/common.sh@469 -- # nvmfpid=1242344 00:26:36.909 22:53:20 -- nvmf/common.sh@470 -- # waitforlisten 1242344 00:26:36.909 22:53:20 -- common/autotest_common.sh@819 -- # '[' -z 1242344 ']' 00:26:36.909 22:53:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.909 22:53:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:36.909 22:53:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.909 22:53:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:36.909 22:53:20 -- common/autotest_common.sh@10 -- # set +x 00:26:36.909 22:53:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:36.909 [2024-04-15 22:53:20.803426] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:26:36.909 [2024-04-15 22:53:20.803487] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.909 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.909 [2024-04-15 22:53:20.882347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.909 [2024-04-15 22:53:20.953598] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:36.909 [2024-04-15 22:53:20.953723] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.909 [2024-04-15 22:53:20.953731] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:36.909 [2024-04-15 22:53:20.953742] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.909 [2024-04-15 22:53:20.953855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.909 [2024-04-15 22:53:20.954010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.909 [2024-04-15 22:53:20.954011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.909 22:53:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:36.909 22:53:21 -- common/autotest_common.sh@852 -- # return 0 00:26:36.909 22:53:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:36.909 22:53:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:36.909 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.909 22:53:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.909 22:53:21 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.909 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.909 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.909 [2024-04-15 22:53:21.621620] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.909 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.909 22:53:21 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.909 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.909 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.909 Malloc0 00:26:36.909 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.909 22:53:21 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.909 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.909 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.909 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.909 22:53:21 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.909 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.909 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.910 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.910 22:53:21 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.910 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.910 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.910 [2024-04-15 22:53:21.688997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.910 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.910 22:53:21 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:36.910 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.910 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:36.910 [2024-04-15 22:53:21.700955] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:36.910 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.910 22:53:21 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:36.910 22:53:21 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:26:36.910 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:37.171 Malloc1 00:26:37.171 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.171 22:53:21 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:37.171 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:37.171 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:37.171 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.171 22:53:21 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:37.171 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:37.171 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:37.171 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.171 22:53:21 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:37.171 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:37.171 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:37.171 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.171 22:53:21 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:37.171 22:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:37.171 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:37.171 22:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.171 22:53:21 -- host/multicontroller.sh@44 -- # bdevperf_pid=1242598 00:26:37.171 22:53:21 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:37.171 22:53:21 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:37.171 22:53:21 -- host/multicontroller.sh@47 -- # waitforlisten 1242598 /var/tmp/bdevperf.sock 00:26:37.171 22:53:21 -- common/autotest_common.sh@819 -- # '[' -z 1242598 ']' 00:26:37.171 22:53:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:37.171 22:53:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:37.171 22:53:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:37.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:37.171 22:53:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:37.171 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:26:38.111 22:53:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:38.111 22:53:22 -- common/autotest_common.sh@852 -- # return 0 00:26:38.111 22:53:22 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:38.111 22:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.111 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.111 NVMe0n1 00:26:38.111 22:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.111 22:53:22 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:38.111 22:53:22 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:38.111 22:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.111 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.111 22:53:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.111 1 00:26:38.111 22:53:22 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:38.111 22:53:22 -- common/autotest_common.sh@640 -- # local es=0 00:26:38.111 22:53:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:38.111 22:53:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:38.111 22:53:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.111 22:53:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:38.111 22:53:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.111 22:53:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:38.111 22:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.111 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.111 request: 00:26:38.111 { 00:26:38.111 "name": "NVMe0", 00:26:38.111 "trtype": "tcp", 00:26:38.111 "traddr": "10.0.0.2", 00:26:38.111 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:38.111 "hostaddr": "10.0.0.2", 00:26:38.111 "hostsvcid": "60000", 00:26:38.111 "adrfam": "ipv4", 00:26:38.111 "trsvcid": "4420", 00:26:38.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.112 "method": "bdev_nvme_attach_controller", 00:26:38.112 "req_id": 1 00:26:38.112 } 00:26:38.112 Got JSON-RPC error response 00:26:38.112 response: 00:26:38.112 { 00:26:38.112 "code": -114, 00:26:38.112 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:38.112 } 00:26:38.112 22:53:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:38.112 22:53:22 -- common/autotest_common.sh@643 -- # es=1 00:26:38.112 22:53:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:38.112 22:53:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:38.112 22:53:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:38.112 22:53:22 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:38.112 22:53:22 -- common/autotest_common.sh@640 -- # local es=0 00:26:38.112 22:53:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:38.112 22:53:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.112 22:53:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:38.112 22:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.112 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.112 request: 00:26:38.112 { 00:26:38.112 "name": "NVMe0", 00:26:38.112 "trtype": "tcp", 00:26:38.112 "traddr": "10.0.0.2", 00:26:38.112 "hostaddr": "10.0.0.2", 00:26:38.112 "hostsvcid": "60000", 00:26:38.112 "adrfam": "ipv4", 00:26:38.112 "trsvcid": "4420", 00:26:38.112 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:38.112 "method": "bdev_nvme_attach_controller", 00:26:38.112 "req_id": 1 00:26:38.112 } 00:26:38.112 Got JSON-RPC error response 00:26:38.112 response: 00:26:38.112 { 00:26:38.112 "code": -114, 00:26:38.112 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:38.112 } 00:26:38.112 22:53:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:38.112 22:53:22 -- common/autotest_common.sh@643 -- # es=1 00:26:38.112 22:53:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:38.112 22:53:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:38.112 22:53:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:38.112 22:53:22 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:38.112 22:53:22 -- common/autotest_common.sh@640 -- # local es=0 00:26:38.112 22:53:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:38.112 22:53:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.112 22:53:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:38.112 22:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.112 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.112 request: 00:26:38.112 { 00:26:38.112 "name": "NVMe0", 00:26:38.112 "trtype": "tcp", 00:26:38.112 "traddr": "10.0.0.2", 00:26:38.112 "hostaddr": 
"10.0.0.2", 00:26:38.112 "hostsvcid": "60000", 00:26:38.112 "adrfam": "ipv4", 00:26:38.112 "trsvcid": "4420", 00:26:38.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.112 "multipath": "disable", 00:26:38.112 "method": "bdev_nvme_attach_controller", 00:26:38.112 "req_id": 1 00:26:38.112 } 00:26:38.112 Got JSON-RPC error response 00:26:38.112 response: 00:26:38.112 { 00:26:38.112 "code": -114, 00:26:38.112 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:38.112 } 00:26:38.112 22:53:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:38.112 22:53:22 -- common/autotest_common.sh@643 -- # es=1 00:26:38.112 22:53:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:38.112 22:53:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:38.112 22:53:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:38.112 22:53:22 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:38.112 22:53:22 -- common/autotest_common.sh@640 -- # local es=0 00:26:38.112 22:53:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:38.112 22:53:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:38.112 22:53:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.112 22:53:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:38.112 22:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.112 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.374 request: 00:26:38.374 { 00:26:38.374 "name": "NVMe0", 00:26:38.374 "trtype": "tcp", 00:26:38.374 "traddr": "10.0.0.2", 00:26:38.374 "hostaddr": "10.0.0.2", 00:26:38.374 "hostsvcid": "60000", 00:26:38.374 "adrfam": "ipv4", 00:26:38.374 "trsvcid": "4420", 00:26:38.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.374 "multipath": "failover", 00:26:38.374 "method": "bdev_nvme_attach_controller", 00:26:38.374 "req_id": 1 00:26:38.374 } 00:26:38.374 Got JSON-RPC error response 00:26:38.374 response: 00:26:38.374 { 00:26:38.374 "code": -114, 00:26:38.374 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:38.374 } 00:26:38.374 22:53:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:38.374 22:53:22 -- common/autotest_common.sh@643 -- # es=1 00:26:38.374 22:53:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:38.374 22:53:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:38.374 22:53:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:38.374 22:53:22 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:38.374 22:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.374 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.374 00:26:38.374 22:53:23 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:26:38.374 22:53:23 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:38.374 22:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.374 22:53:23 -- common/autotest_common.sh@10 -- # set +x 00:26:38.374 22:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.374 22:53:23 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:38.374 22:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.374 22:53:23 -- common/autotest_common.sh@10 -- # set +x 00:26:38.374 00:26:38.374 22:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.374 22:53:23 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:38.374 22:53:23 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:38.374 22:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.374 22:53:23 -- common/autotest_common.sh@10 -- # set +x 00:26:38.374 22:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.635 22:53:23 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:38.635 22:53:23 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:39.579 0 00:26:39.579 22:53:24 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:39.579 22:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.579 22:53:24 -- common/autotest_common.sh@10 -- # set +x 00:26:39.579 22:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.579 22:53:24 -- host/multicontroller.sh@100 -- # killprocess 1242598 00:26:39.579 22:53:24 -- common/autotest_common.sh@926 -- # '[' -z 1242598 ']' 00:26:39.579 22:53:24 -- common/autotest_common.sh@930 -- # kill -0 1242598 00:26:39.579 22:53:24 -- common/autotest_common.sh@931 -- # uname 00:26:39.579 22:53:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:39.579 22:53:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1242598 00:26:39.579 22:53:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:39.579 22:53:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:39.579 22:53:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1242598' 00:26:39.579 killing process with pid 1242598 00:26:39.579 22:53:24 -- common/autotest_common.sh@945 -- # kill 1242598 00:26:39.579 22:53:24 -- common/autotest_common.sh@950 -- # wait 1242598 00:26:39.839 22:53:24 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.839 22:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.839 22:53:24 -- common/autotest_common.sh@10 -- # set +x 00:26:39.839 22:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.839 22:53:24 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:39.839 22:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.839 22:53:24 -- common/autotest_common.sh@10 -- # set +x 00:26:39.839 22:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.839 22:53:24 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
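The -114 responses above are the expected outcome of the multicontroller duplicate-attach checks: once the controller name NVMe0 is bound, a second attach with a different subsystem NQN, host NQN, or multipath mode must be rejected. A minimal by-hand reproduction (a sketch only, assuming rpc_cmd resolves to the stock scripts/rpc.py wrapper and a bdevperf instance is already listening on /var/tmp/bdevperf.sock):

  sock=/var/tmp/bdevperf.sock
  # First attach succeeds and exposes bdev NVMe0n1, as in the log above.
  scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Re-using the controller name for a different subsystem should fail with
  # JSON-RPC error -114 ("A controller named NVMe0 already exists ...").
  scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
      || echo "rejected as expected"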
00:26:39.839 22:53:24 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:39.840 22:53:24 -- common/autotest_common.sh@1597 -- # read -r file 00:26:39.840 22:53:24 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:39.840 22:53:24 -- common/autotest_common.sh@1596 -- # sort -u 00:26:39.840 22:53:24 -- common/autotest_common.sh@1598 -- # cat 00:26:39.840 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:39.840 [2024-04-15 22:53:21.815229] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:26:39.840 [2024-04-15 22:53:21.815290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242598 ] 00:26:39.840 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.840 [2024-04-15 22:53:21.880964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.840 [2024-04-15 22:53:21.943366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.840 [2024-04-15 22:53:23.158540] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name d2f1dca6-b670-44ae-9752-f45183db8dfc already exists 00:26:39.840 [2024-04-15 22:53:23.158577] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:d2f1dca6-b670-44ae-9752-f45183db8dfc alias for bdev NVMe1n1 00:26:39.840 [2024-04-15 22:53:23.158588] bdev_nvme.c:4183:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:39.840 Running I/O for 1 seconds... 00:26:39.840 00:26:39.840 Latency(us) 00:26:39.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.840 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:39.840 NVMe0n1 : 1.00 20400.22 79.69 0.00 0.00 6255.84 4560.21 11632.64 00:26:39.840 =================================================================================================================== 00:26:39.840 Total : 20400.22 79.69 0.00 0.00 6255.84 4560.21 11632.64 00:26:39.840 Received shutdown signal, test time was about 1.000000 seconds 00:26:39.840 00:26:39.840 Latency(us) 00:26:39.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.840 =================================================================================================================== 00:26:39.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:39.840 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:39.840 22:53:24 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:39.840 22:53:24 -- common/autotest_common.sh@1597 -- # read -r file 00:26:39.840 22:53:24 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:39.840 22:53:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:39.840 22:53:24 -- nvmf/common.sh@116 -- # sync 00:26:39.840 22:53:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:39.840 22:53:24 -- nvmf/common.sh@119 -- # set +e 00:26:39.840 22:53:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:39.840 22:53:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:39.840 rmmod nvme_tcp 00:26:39.840 rmmod nvme_fabrics 00:26:39.840 rmmod nvme_keyring 00:26:39.840 22:53:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:39.840 22:53:24 -- nvmf/common.sh@123 -- # set 
-e 00:26:39.840 22:53:24 -- nvmf/common.sh@124 -- # return 0 00:26:39.840 22:53:24 -- nvmf/common.sh@477 -- # '[' -n 1242344 ']' 00:26:39.840 22:53:24 -- nvmf/common.sh@478 -- # killprocess 1242344 00:26:39.840 22:53:24 -- common/autotest_common.sh@926 -- # '[' -z 1242344 ']' 00:26:39.840 22:53:24 -- common/autotest_common.sh@930 -- # kill -0 1242344 00:26:39.840 22:53:24 -- common/autotest_common.sh@931 -- # uname 00:26:39.840 22:53:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:39.840 22:53:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1242344 00:26:40.101 22:53:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:40.101 22:53:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:40.101 22:53:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1242344' 00:26:40.101 killing process with pid 1242344 00:26:40.101 22:53:24 -- common/autotest_common.sh@945 -- # kill 1242344 00:26:40.101 22:53:24 -- common/autotest_common.sh@950 -- # wait 1242344 00:26:40.101 22:53:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:40.101 22:53:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:40.101 22:53:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:40.101 22:53:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.101 22:53:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:40.101 22:53:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.101 22:53:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.101 22:53:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.648 22:53:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:42.648 00:26:42.648 real 0m13.536s 00:26:42.648 user 0m16.334s 00:26:42.648 sys 0m6.219s 00:26:42.648 22:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.648 22:53:26 -- common/autotest_common.sh@10 -- # set +x 00:26:42.648 ************************************ 00:26:42.648 END TEST nvmf_multicontroller 00:26:42.648 ************************************ 00:26:42.648 22:53:26 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:42.648 22:53:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:42.648 22:53:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:42.648 22:53:26 -- common/autotest_common.sh@10 -- # set +x 00:26:42.648 ************************************ 00:26:42.648 START TEST nvmf_aer 00:26:42.648 ************************************ 00:26:42.648 22:53:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:42.648 * Looking for test storage... 
00:26:42.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:42.648 22:53:27 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.648 22:53:27 -- nvmf/common.sh@7 -- # uname -s 00:26:42.648 22:53:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.648 22:53:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.648 22:53:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.648 22:53:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.648 22:53:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.648 22:53:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.648 22:53:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.648 22:53:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.648 22:53:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.648 22:53:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.648 22:53:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:42.648 22:53:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:42.648 22:53:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.648 22:53:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.648 22:53:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.648 22:53:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.648 22:53:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.648 22:53:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.648 22:53:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.648 22:53:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.649 22:53:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.649 22:53:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.649 22:53:27 -- paths/export.sh@5 -- # export PATH 00:26:42.649 22:53:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.649 22:53:27 -- nvmf/common.sh@46 -- # : 0 00:26:42.649 22:53:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:42.649 22:53:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:42.649 22:53:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:42.649 22:53:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.649 22:53:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.649 22:53:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:42.649 22:53:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:42.649 22:53:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:42.649 22:53:27 -- host/aer.sh@11 -- # nvmftestinit 00:26:42.649 22:53:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:42.649 22:53:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.649 22:53:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:42.649 22:53:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:42.649 22:53:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:42.649 22:53:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.649 22:53:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.649 22:53:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.649 22:53:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:42.649 22:53:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:42.649 22:53:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:42.649 22:53:27 -- common/autotest_common.sh@10 -- # set +x 00:26:50.793 22:53:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:50.793 22:53:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:50.793 22:53:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:50.793 22:53:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:50.793 22:53:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:50.793 22:53:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:50.793 22:53:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:50.793 22:53:34 -- nvmf/common.sh@294 -- # net_devs=() 00:26:50.793 22:53:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:50.793 22:53:34 -- nvmf/common.sh@295 -- # e810=() 00:26:50.793 22:53:34 -- nvmf/common.sh@295 -- # local -ga e810 00:26:50.793 22:53:34 -- nvmf/common.sh@296 -- # x722=() 00:26:50.793 
22:53:34 -- nvmf/common.sh@296 -- # local -ga x722 00:26:50.793 22:53:34 -- nvmf/common.sh@297 -- # mlx=() 00:26:50.793 22:53:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:50.793 22:53:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.793 22:53:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:50.793 22:53:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:50.793 22:53:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:50.793 22:53:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:50.793 22:53:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:50.793 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:50.793 22:53:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:50.793 22:53:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:50.793 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:50.793 22:53:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:50.793 22:53:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:50.793 22:53:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.793 22:53:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:50.793 22:53:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.793 22:53:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:50.793 Found net devices under 0000:31:00.0: cvl_0_0 00:26:50.793 22:53:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.793 22:53:34 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:50.793 22:53:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.793 22:53:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:50.793 22:53:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.793 22:53:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:50.793 Found net devices under 0000:31:00.1: cvl_0_1 00:26:50.793 22:53:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.793 22:53:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:50.793 22:53:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:50.793 22:53:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:50.793 22:53:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:50.793 22:53:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.793 22:53:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.793 22:53:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.793 22:53:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:50.793 22:53:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.793 22:53:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.793 22:53:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:50.793 22:53:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.793 22:53:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.793 22:53:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:50.793 22:53:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:50.793 22:53:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.793 22:53:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.793 22:53:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.793 22:53:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.793 22:53:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:50.793 22:53:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.793 22:53:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.793 22:53:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.793 22:53:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:50.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:26:50.793 00:26:50.793 --- 10.0.0.2 ping statistics --- 00:26:50.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.793 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:26:50.793 22:53:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:26:50.793 00:26:50.793 --- 10.0.0.1 ping statistics --- 00:26:50.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.793 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:26:50.793 22:53:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.793 22:53:35 -- nvmf/common.sh@410 -- # return 0 00:26:50.793 22:53:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:50.793 22:53:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.793 22:53:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:50.793 22:53:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:50.793 22:53:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.793 22:53:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:50.793 22:53:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:50.793 22:53:35 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:50.793 22:53:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:50.793 22:53:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:50.793 22:53:35 -- common/autotest_common.sh@10 -- # set +x 00:26:50.793 22:53:35 -- nvmf/common.sh@469 -- # nvmfpid=1247756 00:26:50.793 22:53:35 -- nvmf/common.sh@470 -- # waitforlisten 1247756 00:26:50.793 22:53:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:50.793 22:53:35 -- common/autotest_common.sh@819 -- # '[' -z 1247756 ']' 00:26:50.793 22:53:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.793 22:53:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:50.793 22:53:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.793 22:53:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:50.793 22:53:35 -- common/autotest_common.sh@10 -- # set +x 00:26:50.793 [2024-04-15 22:53:35.309034] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:26:50.793 [2024-04-15 22:53:35.309099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.793 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.793 [2024-04-15 22:53:35.387064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.793 [2024-04-15 22:53:35.459486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:50.793 [2024-04-15 22:53:35.459621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.793 [2024-04-15 22:53:35.459630] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.793 [2024-04-15 22:53:35.459638] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
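For readers following the trace, the nvmf_tcp_init sequence above boils down to roughly the following commands (a minimal sketch using the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses from this particular run; other hosts will have different E810 port names):

    # one E810 port becomes the target and is isolated in its own network namespace;
    # the other port stays in the root namespace and acts as the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in through the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp    # host-side NVMe/TCP initiator driver, loaded just after this in the log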
00:26:50.793 [2024-04-15 22:53:35.459745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.793 [2024-04-15 22:53:35.459863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.793 [2024-04-15 22:53:35.460022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.794 [2024-04-15 22:53:35.460023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.364 22:53:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:51.364 22:53:36 -- common/autotest_common.sh@852 -- # return 0 00:26:51.364 22:53:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:51.364 22:53:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:51.364 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.364 22:53:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.364 22:53:36 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.364 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.364 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.364 [2024-04-15 22:53:36.131691] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.364 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.364 22:53:36 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:51.364 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.364 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.364 Malloc0 00:26:51.364 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.364 22:53:36 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:51.364 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.364 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.364 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.364 22:53:36 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.364 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.364 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.624 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.624 22:53:36 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.624 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.624 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.624 [2024-04-15 22:53:36.191121] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.624 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.624 22:53:36 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:51.624 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.624 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.624 [2024-04-15 22:53:36.202929] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:51.624 [ 00:26:51.624 { 00:26:51.624 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:51.624 "subtype": "Discovery", 00:26:51.624 "listen_addresses": [], 00:26:51.624 "allow_any_host": true, 00:26:51.624 "hosts": [] 00:26:51.624 }, 00:26:51.624 { 00:26:51.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:51.624 "subtype": "NVMe", 00:26:51.624 "listen_addresses": [ 00:26:51.624 { 00:26:51.624 "transport": "TCP", 00:26:51.624 "trtype": "TCP", 00:26:51.624 "adrfam": "IPv4", 00:26:51.624 "traddr": "10.0.0.2", 00:26:51.624 "trsvcid": "4420" 00:26:51.624 } 00:26:51.624 ], 00:26:51.624 "allow_any_host": true, 00:26:51.624 "hosts": [], 00:26:51.624 "serial_number": "SPDK00000000000001", 00:26:51.624 "model_number": "SPDK bdev Controller", 00:26:51.624 "max_namespaces": 2, 00:26:51.624 "min_cntlid": 1, 00:26:51.624 "max_cntlid": 65519, 00:26:51.624 "namespaces": [ 00:26:51.624 { 00:26:51.624 "nsid": 1, 00:26:51.624 "bdev_name": "Malloc0", 00:26:51.624 "name": "Malloc0", 00:26:51.624 "nguid": "5ECC3E62D0144F19BD83A16D23F40E6A", 00:26:51.624 "uuid": "5ecc3e62-d014-4f19-bd83-a16d23f40e6a" 00:26:51.624 } 00:26:51.624 ] 00:26:51.624 } 00:26:51.624 ] 00:26:51.624 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.624 22:53:36 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:51.624 22:53:36 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:51.624 22:53:36 -- host/aer.sh@33 -- # aerpid=1248024 00:26:51.624 22:53:36 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:51.624 22:53:36 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:51.624 22:53:36 -- common/autotest_common.sh@1244 -- # local i=0 00:26:51.624 22:53:36 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:51.624 22:53:36 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:26:51.624 22:53:36 -- common/autotest_common.sh@1247 -- # i=1 00:26:51.624 22:53:36 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:51.624 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.624 22:53:36 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:51.624 22:53:36 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:26:51.624 22:53:36 -- common/autotest_common.sh@1247 -- # i=2 00:26:51.624 22:53:36 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:51.624 22:53:36 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:51.624 22:53:36 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:51.624 22:53:36 -- common/autotest_common.sh@1255 -- # return 0 00:26:51.624 22:53:36 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:51.624 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.624 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.883 Malloc1 00:26:51.883 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.883 22:53:36 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:51.883 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.883 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.883 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.883 22:53:36 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:51.883 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.883 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.883 [ 00:26:51.883 { 00:26:51.883 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:51.883 "subtype": "Discovery", 00:26:51.883 "listen_addresses": [], 00:26:51.883 "allow_any_host": true, 00:26:51.883 "hosts": [] 00:26:51.883 }, 00:26:51.883 { 00:26:51.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:51.883 "subtype": "NVMe", 00:26:51.883 "listen_addresses": [ 00:26:51.883 { 00:26:51.883 "transport": "TCP", 00:26:51.883 "trtype": "TCP", 00:26:51.883 "adrfam": "IPv4", 00:26:51.883 "traddr": "10.0.0.2", 00:26:51.883 "trsvcid": "4420" 00:26:51.883 } 00:26:51.883 ], 00:26:51.883 "allow_any_host": true, 00:26:51.883 "hosts": [], 00:26:51.883 "serial_number": "SPDK00000000000001", 00:26:51.883 "model_number": "SPDK bdev Controller", 00:26:51.883 "max_namespaces": 2, 00:26:51.883 "min_cntlid": 1, 00:26:51.883 "max_cntlid": 65519, 00:26:51.883 "namespaces": [ 00:26:51.883 { 00:26:51.884 "nsid": 1, 00:26:51.884 "bdev_name": "Malloc0", 00:26:51.884 "name": "Malloc0", 00:26:51.884 "nguid": "5ECC3E62D0144F19BD83A16D23F40E6A", 00:26:51.884 "uuid": "5ecc3e62-d014-4f19-bd83-a16d23f40e6a" 00:26:51.884 }, 00:26:51.884 Asynchronous Event Request test 00:26:51.884 Attaching to 10.0.0.2 00:26:51.884 Attached to 10.0.0.2 00:26:51.884 Registering asynchronous event callbacks... 00:26:51.884 Starting namespace attribute notice tests for all controllers... 00:26:51.884 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:51.884 aer_cb - Changed Namespace 00:26:51.884 Cleaning up... 
00:26:51.884 { 00:26:51.884 "nsid": 2, 00:26:51.884 "bdev_name": "Malloc1", 00:26:51.884 "name": "Malloc1", 00:26:51.884 "nguid": "61B6FB474C614B7180ED04CD9A45024C", 00:26:51.884 "uuid": "61b6fb47-4c61-4b71-80ed-04cd9a45024c" 00:26:51.884 } 00:26:51.884 ] 00:26:51.884 } 00:26:51.884 ] 00:26:51.884 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.884 22:53:36 -- host/aer.sh@43 -- # wait 1248024 00:26:51.884 22:53:36 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:51.884 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.884 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.884 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.884 22:53:36 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:51.884 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.884 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.884 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.884 22:53:36 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.884 22:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.884 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:26:51.884 22:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.884 22:53:36 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:51.884 22:53:36 -- host/aer.sh@51 -- # nvmftestfini 00:26:51.884 22:53:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:51.884 22:53:36 -- nvmf/common.sh@116 -- # sync 00:26:51.884 22:53:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:51.884 22:53:36 -- nvmf/common.sh@119 -- # set +e 00:26:51.884 22:53:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:51.884 22:53:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:51.884 rmmod nvme_tcp 00:26:51.884 rmmod nvme_fabrics 00:26:51.884 rmmod nvme_keyring 00:26:51.884 22:53:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:51.884 22:53:36 -- nvmf/common.sh@123 -- # set -e 00:26:51.884 22:53:36 -- nvmf/common.sh@124 -- # return 0 00:26:51.884 22:53:36 -- nvmf/common.sh@477 -- # '[' -n 1247756 ']' 00:26:51.884 22:53:36 -- nvmf/common.sh@478 -- # killprocess 1247756 00:26:51.884 22:53:36 -- common/autotest_common.sh@926 -- # '[' -z 1247756 ']' 00:26:51.884 22:53:36 -- common/autotest_common.sh@930 -- # kill -0 1247756 00:26:51.884 22:53:36 -- common/autotest_common.sh@931 -- # uname 00:26:51.884 22:53:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:51.884 22:53:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1247756 00:26:51.884 22:53:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:51.884 22:53:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:51.884 22:53:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1247756' 00:26:51.884 killing process with pid 1247756 00:26:51.884 22:53:36 -- common/autotest_common.sh@945 -- # kill 1247756 00:26:51.884 [2024-04-15 22:53:36.665395] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:51.884 22:53:36 -- common/autotest_common.sh@950 -- # wait 1247756 00:26:52.143 22:53:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:52.143 22:53:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:52.143 22:53:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:52.143 22:53:36 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.143 22:53:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:52.143 22:53:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.143 22:53:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.143 22:53:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.088 22:53:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:54.088 00:26:54.088 real 0m11.935s 00:26:54.088 user 0m7.709s 00:26:54.088 sys 0m6.480s 00:26:54.088 22:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.088 22:53:38 -- common/autotest_common.sh@10 -- # set +x 00:26:54.088 ************************************ 00:26:54.088 END TEST nvmf_aer 00:26:54.088 ************************************ 00:26:54.348 22:53:38 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:54.348 22:53:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:54.348 22:53:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:54.348 22:53:38 -- common/autotest_common.sh@10 -- # set +x 00:26:54.348 ************************************ 00:26:54.348 START TEST nvmf_async_init 00:26:54.348 ************************************ 00:26:54.349 22:53:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:54.349 * Looking for test storage... 00:26:54.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:54.349 22:53:39 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.349 22:53:39 -- nvmf/common.sh@7 -- # uname -s 00:26:54.349 22:53:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.349 22:53:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.349 22:53:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.349 22:53:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.349 22:53:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.349 22:53:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.349 22:53:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.349 22:53:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.349 22:53:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.349 22:53:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.349 22:53:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:54.349 22:53:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:54.349 22:53:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.349 22:53:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.349 22:53:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.349 22:53:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.349 22:53:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.349 22:53:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.349 22:53:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.349 22:53:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.349 22:53:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.349 22:53:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.349 22:53:39 -- paths/export.sh@5 -- # export PATH 00:26:54.349 22:53:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.349 22:53:39 -- nvmf/common.sh@46 -- # : 0 00:26:54.349 22:53:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:54.349 22:53:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:54.349 22:53:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:54.349 22:53:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.349 22:53:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.349 22:53:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:54.349 22:53:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:54.349 22:53:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:54.349 22:53:39 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:54.349 22:53:39 -- host/async_init.sh@14 -- # null_block_size=512 00:26:54.349 22:53:39 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:54.349 22:53:39 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:54.349 22:53:39 -- host/async_init.sh@20 -- # uuidgen 00:26:54.349 22:53:39 -- host/async_init.sh@20 -- # tr -d - 00:26:54.349 22:53:39 -- host/async_init.sh@20 -- # nguid=d991edb567784f8998d4ea072f441f87 00:26:54.349 22:53:39 -- host/async_init.sh@22 -- # nvmftestinit 00:26:54.349 22:53:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
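The nguid computed just above deserves a note: async_init strips the dashes from a freshly generated UUID so it can be handed to nvmf_subsystem_add_ns as a 32-hex-digit namespace GUID, and the same value later reappears (re-dashed) as the uuid and alias of the attached nvme0n1 bdev. A rough sketch of that flow, using the rpc_cmd helper this harness provides:

    nguid=$(uuidgen | tr -d -)     # e.g. d991edb567784f8998d4ea072f441f87 in this run
    rpc_cmd bdev_null_create null0 1024 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    # after bdev_nvme_attach_controller, bdev_get_bdevs reports the dashed form
    # (d991edb5-6778-4f89-98d4-ea072f441f87) as the "uuid" and alias of nvme0n1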
00:26:54.349 22:53:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.349 22:53:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:54.349 22:53:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:54.349 22:53:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:54.349 22:53:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.349 22:53:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.349 22:53:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.349 22:53:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:54.349 22:53:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:54.349 22:53:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:54.349 22:53:39 -- common/autotest_common.sh@10 -- # set +x 00:27:02.496 22:53:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:02.496 22:53:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:02.496 22:53:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:02.496 22:53:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:02.496 22:53:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:02.496 22:53:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:02.496 22:53:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:02.496 22:53:46 -- nvmf/common.sh@294 -- # net_devs=() 00:27:02.496 22:53:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:02.496 22:53:46 -- nvmf/common.sh@295 -- # e810=() 00:27:02.496 22:53:46 -- nvmf/common.sh@295 -- # local -ga e810 00:27:02.496 22:53:46 -- nvmf/common.sh@296 -- # x722=() 00:27:02.496 22:53:46 -- nvmf/common.sh@296 -- # local -ga x722 00:27:02.496 22:53:46 -- nvmf/common.sh@297 -- # mlx=() 00:27:02.496 22:53:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:02.496 22:53:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.496 22:53:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:02.496 22:53:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:02.496 22:53:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:02.496 22:53:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:02.496 22:53:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:02.496 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:02.496 22:53:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:02.496 22:53:46 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:02.496 22:53:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:02.496 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:02.496 22:53:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:02.496 22:53:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:02.496 22:53:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:02.496 22:53:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.496 22:53:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:02.496 22:53:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.497 22:53:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:02.497 Found net devices under 0000:31:00.0: cvl_0_0 00:27:02.497 22:53:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.497 22:53:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:02.497 22:53:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.497 22:53:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:02.497 22:53:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.497 22:53:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:02.497 Found net devices under 0000:31:00.1: cvl_0_1 00:27:02.497 22:53:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.497 22:53:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:02.497 22:53:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:02.497 22:53:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:02.497 22:53:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:02.497 22:53:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:02.497 22:53:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.497 22:53:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.497 22:53:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.497 22:53:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:02.497 22:53:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.497 22:53:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.497 22:53:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:02.497 22:53:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.497 22:53:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.497 22:53:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:02.497 22:53:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:02.497 22:53:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.497 22:53:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:02.497 22:53:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.497 22:53:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.497 22:53:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:02.497 22:53:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.497 22:53:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.497 22:53:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.497 22:53:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:02.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:27:02.497 00:27:02.497 --- 10.0.0.2 ping statistics --- 00:27:02.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.497 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:27:02.497 22:53:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:27:02.497 00:27:02.497 --- 10.0.0.1 ping statistics --- 00:27:02.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.497 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:27:02.497 22:53:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.497 22:53:46 -- nvmf/common.sh@410 -- # return 0 00:27:02.497 22:53:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:02.497 22:53:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.497 22:53:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:02.497 22:53:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:02.497 22:53:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.497 22:53:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:02.497 22:53:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:02.497 22:53:46 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:02.497 22:53:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:02.497 22:53:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:02.497 22:53:46 -- common/autotest_common.sh@10 -- # set +x 00:27:02.497 22:53:46 -- nvmf/common.sh@469 -- # nvmfpid=1252644 00:27:02.497 22:53:46 -- nvmf/common.sh@470 -- # waitforlisten 1252644 00:27:02.497 22:53:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:02.497 22:53:46 -- common/autotest_common.sh@819 -- # '[' -z 1252644 ']' 00:27:02.497 22:53:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.497 22:53:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:02.497 22:53:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.497 22:53:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:02.497 22:53:46 -- common/autotest_common.sh@10 -- # set +x 00:27:02.497 [2024-04-15 22:53:46.980179] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
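nvmfappstart, whose output begins here, is essentially a background launch of nvmf_tgt inside the target namespace plus a wait for its RPC socket. A simplified sketch (the real waitforlisten in autotest_common.sh polls in a loop with max_retries=100, as the trace above shows):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # wait until the target is up and listening on its RPC socket
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done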
00:27:02.497 [2024-04-15 22:53:46.980246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.497 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.497 [2024-04-15 22:53:47.058274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.497 [2024-04-15 22:53:47.130117] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:02.497 [2024-04-15 22:53:47.130240] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.497 [2024-04-15 22:53:47.130249] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.497 [2024-04-15 22:53:47.130256] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.497 [2024-04-15 22:53:47.130274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.070 22:53:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:03.070 22:53:47 -- common/autotest_common.sh@852 -- # return 0 00:27:03.070 22:53:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:03.070 22:53:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:03.070 22:53:47 -- common/autotest_common.sh@10 -- # set +x 00:27:03.070 22:53:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.070 22:53:47 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:03.070 22:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.070 22:53:47 -- common/autotest_common.sh@10 -- # set +x 00:27:03.070 [2024-04-15 22:53:47.789115] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.070 22:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.070 22:53:47 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:03.070 22:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.070 22:53:47 -- common/autotest_common.sh@10 -- # set +x 00:27:03.070 null0 00:27:03.070 22:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.070 22:53:47 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:03.070 22:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.070 22:53:47 -- common/autotest_common.sh@10 -- # set +x 00:27:03.070 22:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.070 22:53:47 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:03.070 22:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.070 22:53:47 -- common/autotest_common.sh@10 -- # set +x 00:27:03.070 22:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.070 22:53:47 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d991edb567784f8998d4ea072f441f87 00:27:03.070 22:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.070 22:53:47 -- common/autotest_common.sh@10 -- # set +x 00:27:03.070 22:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.070 22:53:47 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:03.070 22:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.070 22:53:47 -- 
common/autotest_common.sh@10 -- # set +x 00:27:03.070 [2024-04-15 22:53:47.845350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.070 22:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.070 22:53:47 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:03.070 22:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.070 22:53:47 -- common/autotest_common.sh@10 -- # set +x 00:27:03.331 nvme0n1 00:27:03.331 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.331 22:53:48 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:03.331 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.331 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.331 [ 00:27:03.331 { 00:27:03.331 "name": "nvme0n1", 00:27:03.331 "aliases": [ 00:27:03.331 "d991edb5-6778-4f89-98d4-ea072f441f87" 00:27:03.331 ], 00:27:03.331 "product_name": "NVMe disk", 00:27:03.331 "block_size": 512, 00:27:03.331 "num_blocks": 2097152, 00:27:03.331 "uuid": "d991edb5-6778-4f89-98d4-ea072f441f87", 00:27:03.331 "assigned_rate_limits": { 00:27:03.331 "rw_ios_per_sec": 0, 00:27:03.331 "rw_mbytes_per_sec": 0, 00:27:03.331 "r_mbytes_per_sec": 0, 00:27:03.331 "w_mbytes_per_sec": 0 00:27:03.331 }, 00:27:03.331 "claimed": false, 00:27:03.331 "zoned": false, 00:27:03.331 "supported_io_types": { 00:27:03.331 "read": true, 00:27:03.331 "write": true, 00:27:03.331 "unmap": false, 00:27:03.331 "write_zeroes": true, 00:27:03.331 "flush": true, 00:27:03.331 "reset": true, 00:27:03.331 "compare": true, 00:27:03.331 "compare_and_write": true, 00:27:03.331 "abort": true, 00:27:03.331 "nvme_admin": true, 00:27:03.331 "nvme_io": true 00:27:03.331 }, 00:27:03.331 "driver_specific": { 00:27:03.331 "nvme": [ 00:27:03.331 { 00:27:03.331 "trid": { 00:27:03.331 "trtype": "TCP", 00:27:03.331 "adrfam": "IPv4", 00:27:03.331 "traddr": "10.0.0.2", 00:27:03.331 "trsvcid": "4420", 00:27:03.331 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:03.331 }, 00:27:03.331 "ctrlr_data": { 00:27:03.331 "cntlid": 1, 00:27:03.331 "vendor_id": "0x8086", 00:27:03.331 "model_number": "SPDK bdev Controller", 00:27:03.331 "serial_number": "00000000000000000000", 00:27:03.331 "firmware_revision": "24.01.1", 00:27:03.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:03.331 "oacs": { 00:27:03.331 "security": 0, 00:27:03.331 "format": 0, 00:27:03.331 "firmware": 0, 00:27:03.331 "ns_manage": 0 00:27:03.331 }, 00:27:03.331 "multi_ctrlr": true, 00:27:03.331 "ana_reporting": false 00:27:03.331 }, 00:27:03.331 "vs": { 00:27:03.331 "nvme_version": "1.3" 00:27:03.331 }, 00:27:03.331 "ns_data": { 00:27:03.331 "id": 1, 00:27:03.331 "can_share": true 00:27:03.331 } 00:27:03.331 } 00:27:03.331 ], 00:27:03.331 "mp_policy": "active_passive" 00:27:03.331 } 00:27:03.332 } 00:27:03.332 ] 00:27:03.332 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.332 22:53:48 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:03.332 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.332 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.332 [2024-04-15 22:53:48.109932] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:03.332 [2024-04-15 22:53:48.109992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f4370 (9): Bad file 
descriptor 00:27:03.593 [2024-04-15 22:53:48.241632] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:03.593 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.593 22:53:48 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:03.593 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.593 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.593 [ 00:27:03.593 { 00:27:03.593 "name": "nvme0n1", 00:27:03.593 "aliases": [ 00:27:03.593 "d991edb5-6778-4f89-98d4-ea072f441f87" 00:27:03.593 ], 00:27:03.593 "product_name": "NVMe disk", 00:27:03.593 "block_size": 512, 00:27:03.593 "num_blocks": 2097152, 00:27:03.593 "uuid": "d991edb5-6778-4f89-98d4-ea072f441f87", 00:27:03.593 "assigned_rate_limits": { 00:27:03.593 "rw_ios_per_sec": 0, 00:27:03.593 "rw_mbytes_per_sec": 0, 00:27:03.593 "r_mbytes_per_sec": 0, 00:27:03.593 "w_mbytes_per_sec": 0 00:27:03.593 }, 00:27:03.593 "claimed": false, 00:27:03.593 "zoned": false, 00:27:03.593 "supported_io_types": { 00:27:03.593 "read": true, 00:27:03.593 "write": true, 00:27:03.593 "unmap": false, 00:27:03.593 "write_zeroes": true, 00:27:03.593 "flush": true, 00:27:03.593 "reset": true, 00:27:03.593 "compare": true, 00:27:03.593 "compare_and_write": true, 00:27:03.593 "abort": true, 00:27:03.593 "nvme_admin": true, 00:27:03.593 "nvme_io": true 00:27:03.593 }, 00:27:03.593 "driver_specific": { 00:27:03.593 "nvme": [ 00:27:03.593 { 00:27:03.593 "trid": { 00:27:03.593 "trtype": "TCP", 00:27:03.593 "adrfam": "IPv4", 00:27:03.593 "traddr": "10.0.0.2", 00:27:03.593 "trsvcid": "4420", 00:27:03.593 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:03.593 }, 00:27:03.593 "ctrlr_data": { 00:27:03.593 "cntlid": 2, 00:27:03.593 "vendor_id": "0x8086", 00:27:03.593 "model_number": "SPDK bdev Controller", 00:27:03.593 "serial_number": "00000000000000000000", 00:27:03.593 "firmware_revision": "24.01.1", 00:27:03.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:03.593 "oacs": { 00:27:03.593 "security": 0, 00:27:03.593 "format": 0, 00:27:03.593 "firmware": 0, 00:27:03.593 "ns_manage": 0 00:27:03.593 }, 00:27:03.593 "multi_ctrlr": true, 00:27:03.593 "ana_reporting": false 00:27:03.593 }, 00:27:03.593 "vs": { 00:27:03.593 "nvme_version": "1.3" 00:27:03.593 }, 00:27:03.593 "ns_data": { 00:27:03.593 "id": 1, 00:27:03.593 "can_share": true 00:27:03.593 } 00:27:03.593 } 00:27:03.593 ], 00:27:03.593 "mp_policy": "active_passive" 00:27:03.593 } 00:27:03.593 } 00:27:03.593 ] 00:27:03.593 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.593 22:53:48 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.593 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.593 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.593 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.593 22:53:48 -- host/async_init.sh@53 -- # mktemp 00:27:03.593 22:53:48 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Z9NCVYVCJu 00:27:03.593 22:53:48 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:03.593 22:53:48 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Z9NCVYVCJu 00:27:03.593 22:53:48 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:03.594 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.594 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.594 22:53:48 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.594 22:53:48 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:03.594 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.594 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.594 [2024-04-15 22:53:48.306561] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:03.594 [2024-04-15 22:53:48.306668] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:03.594 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.594 22:53:48 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z9NCVYVCJu 00:27:03.594 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.594 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.594 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.594 22:53:48 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z9NCVYVCJu 00:27:03.594 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.594 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.594 [2024-04-15 22:53:48.330625] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:03.594 nvme0n1 00:27:03.594 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.594 22:53:48 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:03.594 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.594 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.855 [ 00:27:03.855 { 00:27:03.855 "name": "nvme0n1", 00:27:03.855 "aliases": [ 00:27:03.855 "d991edb5-6778-4f89-98d4-ea072f441f87" 00:27:03.855 ], 00:27:03.855 "product_name": "NVMe disk", 00:27:03.855 "block_size": 512, 00:27:03.855 "num_blocks": 2097152, 00:27:03.855 "uuid": "d991edb5-6778-4f89-98d4-ea072f441f87", 00:27:03.855 "assigned_rate_limits": { 00:27:03.855 "rw_ios_per_sec": 0, 00:27:03.855 "rw_mbytes_per_sec": 0, 00:27:03.855 "r_mbytes_per_sec": 0, 00:27:03.855 "w_mbytes_per_sec": 0 00:27:03.855 }, 00:27:03.855 "claimed": false, 00:27:03.855 "zoned": false, 00:27:03.855 "supported_io_types": { 00:27:03.855 "read": true, 00:27:03.855 "write": true, 00:27:03.855 "unmap": false, 00:27:03.855 "write_zeroes": true, 00:27:03.855 "flush": true, 00:27:03.855 "reset": true, 00:27:03.855 "compare": true, 00:27:03.855 "compare_and_write": true, 00:27:03.855 "abort": true, 00:27:03.855 "nvme_admin": true, 00:27:03.855 "nvme_io": true 00:27:03.855 }, 00:27:03.855 "driver_specific": { 00:27:03.855 "nvme": [ 00:27:03.855 { 00:27:03.855 "trid": { 00:27:03.855 "trtype": "TCP", 00:27:03.855 "adrfam": "IPv4", 00:27:03.855 "traddr": "10.0.0.2", 00:27:03.855 "trsvcid": "4421", 00:27:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:03.855 }, 00:27:03.855 "ctrlr_data": { 00:27:03.855 "cntlid": 3, 00:27:03.855 "vendor_id": "0x8086", 00:27:03.855 "model_number": "SPDK bdev Controller", 00:27:03.855 "serial_number": "00000000000000000000", 00:27:03.855 "firmware_revision": "24.01.1", 00:27:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:03.855 "oacs": { 00:27:03.855 "security": 0, 00:27:03.855 "format": 0, 00:27:03.855 "firmware": 0, 00:27:03.855 
"ns_manage": 0 00:27:03.855 }, 00:27:03.855 "multi_ctrlr": true, 00:27:03.855 "ana_reporting": false 00:27:03.855 }, 00:27:03.855 "vs": { 00:27:03.855 "nvme_version": "1.3" 00:27:03.855 }, 00:27:03.855 "ns_data": { 00:27:03.855 "id": 1, 00:27:03.855 "can_share": true 00:27:03.855 } 00:27:03.855 } 00:27:03.855 ], 00:27:03.855 "mp_policy": "active_passive" 00:27:03.855 } 00:27:03.855 } 00:27:03.855 ] 00:27:03.855 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.855 22:53:48 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.855 22:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.855 22:53:48 -- common/autotest_common.sh@10 -- # set +x 00:27:03.855 22:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.855 22:53:48 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Z9NCVYVCJu 00:27:03.855 22:53:48 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:03.855 22:53:48 -- host/async_init.sh@78 -- # nvmftestfini 00:27:03.855 22:53:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:03.855 22:53:48 -- nvmf/common.sh@116 -- # sync 00:27:03.855 22:53:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:03.855 22:53:48 -- nvmf/common.sh@119 -- # set +e 00:27:03.855 22:53:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:03.855 22:53:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:03.855 rmmod nvme_tcp 00:27:03.855 rmmod nvme_fabrics 00:27:03.855 rmmod nvme_keyring 00:27:03.855 22:53:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:03.855 22:53:48 -- nvmf/common.sh@123 -- # set -e 00:27:03.855 22:53:48 -- nvmf/common.sh@124 -- # return 0 00:27:03.855 22:53:48 -- nvmf/common.sh@477 -- # '[' -n 1252644 ']' 00:27:03.855 22:53:48 -- nvmf/common.sh@478 -- # killprocess 1252644 00:27:03.855 22:53:48 -- common/autotest_common.sh@926 -- # '[' -z 1252644 ']' 00:27:03.855 22:53:48 -- common/autotest_common.sh@930 -- # kill -0 1252644 00:27:03.855 22:53:48 -- common/autotest_common.sh@931 -- # uname 00:27:03.855 22:53:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:03.855 22:53:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1252644 00:27:03.855 22:53:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:03.855 22:53:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:03.855 22:53:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1252644' 00:27:03.855 killing process with pid 1252644 00:27:03.855 22:53:48 -- common/autotest_common.sh@945 -- # kill 1252644 00:27:03.855 22:53:48 -- common/autotest_common.sh@950 -- # wait 1252644 00:27:04.116 22:53:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:04.116 22:53:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:04.116 22:53:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:04.116 22:53:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.116 22:53:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:04.116 22:53:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.116 22:53:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.116 22:53:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.030 22:53:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:06.030 00:27:06.030 real 0m11.860s 00:27:06.030 user 0m4.156s 00:27:06.030 sys 0m6.140s 00:27:06.030 22:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.030 22:53:50 -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.030 ************************************ 00:27:06.030 END TEST nvmf_async_init 00:27:06.030 ************************************ 00:27:06.030 22:53:50 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:06.030 22:53:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:06.030 22:53:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:06.030 22:53:50 -- common/autotest_common.sh@10 -- # set +x 00:27:06.030 ************************************ 00:27:06.030 START TEST dma 00:27:06.030 ************************************ 00:27:06.030 22:53:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:06.292 * Looking for test storage... 00:27:06.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.292 22:53:50 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.292 22:53:50 -- nvmf/common.sh@7 -- # uname -s 00:27:06.292 22:53:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.292 22:53:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.292 22:53:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.292 22:53:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.292 22:53:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.292 22:53:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.292 22:53:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.292 22:53:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.292 22:53:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.292 22:53:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.292 22:53:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:06.292 22:53:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:06.292 22:53:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.292 22:53:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.292 22:53:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.292 22:53:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.292 22:53:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.292 22:53:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.292 22:53:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.292 22:53:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.292 22:53:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.292 22:53:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.292 22:53:50 -- paths/export.sh@5 -- # export PATH 00:27:06.292 22:53:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.292 22:53:50 -- nvmf/common.sh@46 -- # : 0 00:27:06.292 22:53:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:06.292 22:53:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:06.292 22:53:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:06.292 22:53:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.292 22:53:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.292 22:53:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:06.292 22:53:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:06.292 22:53:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:06.292 22:53:50 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:06.292 22:53:50 -- host/dma.sh@13 -- # exit 0 00:27:06.292 00:27:06.292 real 0m0.103s 00:27:06.292 user 0m0.041s 00:27:06.292 sys 0m0.071s 00:27:06.292 22:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.292 22:53:50 -- common/autotest_common.sh@10 -- # set +x 00:27:06.292 ************************************ 00:27:06.292 END TEST dma 00:27:06.292 ************************************ 00:27:06.292 22:53:50 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:06.292 22:53:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:06.292 22:53:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:06.292 22:53:50 -- common/autotest_common.sh@10 -- # set +x 00:27:06.292 ************************************ 00:27:06.292 START TEST nvmf_identify 00:27:06.292 ************************************ 00:27:06.292 22:53:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:06.292 * Looking for 
test storage... 00:27:06.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.293 22:53:51 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.293 22:53:51 -- nvmf/common.sh@7 -- # uname -s 00:27:06.293 22:53:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.293 22:53:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.293 22:53:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.293 22:53:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.293 22:53:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.293 22:53:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.293 22:53:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.293 22:53:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.293 22:53:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.293 22:53:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.293 22:53:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:06.293 22:53:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:06.293 22:53:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.293 22:53:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.293 22:53:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.555 22:53:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.555 22:53:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.555 22:53:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.555 22:53:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.555 22:53:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.555 22:53:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.555 22:53:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.555 22:53:51 -- paths/export.sh@5 -- # export PATH 00:27:06.555 22:53:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.555 22:53:51 -- nvmf/common.sh@46 -- # : 0 00:27:06.555 22:53:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:06.555 22:53:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:06.555 22:53:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:06.555 22:53:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.555 22:53:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.555 22:53:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:06.555 22:53:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:06.555 22:53:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:06.555 22:53:51 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:06.555 22:53:51 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:06.555 22:53:51 -- host/identify.sh@14 -- # nvmftestinit 00:27:06.555 22:53:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:06.555 22:53:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.555 22:53:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:06.555 22:53:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:06.555 22:53:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:06.555 22:53:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.555 22:53:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.555 22:53:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.555 22:53:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:06.555 22:53:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:06.555 22:53:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:06.555 22:53:51 -- common/autotest_common.sh@10 -- # set +x 00:27:14.704 22:53:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:14.704 22:53:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:14.704 22:53:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:14.704 22:53:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:14.704 22:53:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:14.704 22:53:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:14.704 22:53:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:14.704 22:53:58 -- nvmf/common.sh@294 -- # net_devs=() 00:27:14.704 22:53:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:14.704 22:53:58 -- nvmf/common.sh@295 
-- # e810=() 00:27:14.704 22:53:58 -- nvmf/common.sh@295 -- # local -ga e810 00:27:14.704 22:53:58 -- nvmf/common.sh@296 -- # x722=() 00:27:14.704 22:53:58 -- nvmf/common.sh@296 -- # local -ga x722 00:27:14.704 22:53:58 -- nvmf/common.sh@297 -- # mlx=() 00:27:14.704 22:53:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:14.704 22:53:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.704 22:53:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.704 22:53:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.704 22:53:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.704 22:53:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.704 22:53:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.704 22:53:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.704 22:53:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.704 22:53:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.705 22:53:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.705 22:53:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.705 22:53:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:14.705 22:53:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:14.705 22:53:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:14.705 22:53:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:14.705 22:53:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:14.705 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:14.705 22:53:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:14.705 22:53:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:14.705 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:14.705 22:53:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:14.705 22:53:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:14.705 22:53:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.705 22:53:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:14.705 22:53:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.705 22:53:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:14.705 Found 
net devices under 0000:31:00.0: cvl_0_0 00:27:14.705 22:53:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.705 22:53:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:14.705 22:53:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.705 22:53:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:14.705 22:53:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.705 22:53:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:14.705 Found net devices under 0000:31:00.1: cvl_0_1 00:27:14.705 22:53:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.705 22:53:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:14.705 22:53:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:14.705 22:53:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:14.705 22:53:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.705 22:53:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.705 22:53:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.705 22:53:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:14.705 22:53:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.705 22:53:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.705 22:53:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:14.705 22:53:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.705 22:53:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.705 22:53:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:14.705 22:53:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:14.705 22:53:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.705 22:53:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.705 22:53:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.705 22:53:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.705 22:53:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:14.705 22:53:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.705 22:53:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.705 22:53:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.705 22:53:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:14.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:27:14.705 00:27:14.705 --- 10.0.0.2 ping statistics --- 00:27:14.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.705 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:27:14.705 22:53:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:27:14.705 00:27:14.705 --- 10.0.0.1 ping statistics --- 00:27:14.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.705 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:27:14.705 22:53:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.705 22:53:58 -- nvmf/common.sh@410 -- # return 0 00:27:14.705 22:53:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:14.705 22:53:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.705 22:53:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:14.705 22:53:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.705 22:53:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:14.705 22:53:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:14.705 22:53:58 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:14.705 22:53:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:14.705 22:53:58 -- common/autotest_common.sh@10 -- # set +x 00:27:14.705 22:53:58 -- host/identify.sh@19 -- # nvmfpid=1257575 00:27:14.705 22:53:58 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:14.705 22:53:58 -- host/identify.sh@23 -- # waitforlisten 1257575 00:27:14.705 22:53:58 -- common/autotest_common.sh@819 -- # '[' -z 1257575 ']' 00:27:14.705 22:53:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.705 22:53:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:14.705 22:53:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.705 22:53:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:14.705 22:53:58 -- common/autotest_common.sh@10 -- # set +x 00:27:14.705 22:53:58 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:14.705 [2024-04-15 22:53:58.984559] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:27:14.705 [2024-04-15 22:53:58.984609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.705 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.705 [2024-04-15 22:53:59.058461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.705 [2024-04-15 22:53:59.126331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:14.705 [2024-04-15 22:53:59.126463] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.705 [2024-04-15 22:53:59.126473] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.705 [2024-04-15 22:53:59.126482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
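The netns-based test topology that the trace above establishes can be summarized as the following shell sequence. This is a minimal sketch reconstructed from the commands visible in the log (namespace cvl_0_0_ns_spdk, interfaces cvl_0_0/cvl_0_1, and the 10.0.0.0/24 addresses all come from the trace); it is not the full nvmf/common.sh logic and omits the address flush and error handling.

# target-side namespace holds one E810 port; the initiator keeps the other port on the host
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address (namespace side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP (port 4420) through
ping -c 1 10.0.0.2                                                    # host -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> host reachability
modprobe nvme-tcp                                                     # kernel initiator transport

With both pings answering, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the reactor startup seen in the next trace lines.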
00:27:14.705 [2024-04-15 22:53:59.126609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.705 [2024-04-15 22:53:59.126893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.705 [2024-04-15 22:53:59.127059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.705 [2024-04-15 22:53:59.127059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.967 22:53:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:14.967 22:53:59 -- common/autotest_common.sh@852 -- # return 0 00:27:14.967 22:53:59 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:14.967 22:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:14.967 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:27:14.967 [2024-04-15 22:53:59.760548] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.967 22:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:14.967 22:53:59 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:14.967 22:53:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:14.967 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:27:15.230 22:53:59 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:15.230 22:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.230 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:27:15.230 Malloc0 00:27:15.230 22:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.230 22:53:59 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.230 22:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.230 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:27:15.230 22:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.230 22:53:59 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:15.230 22:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.230 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:27:15.230 22:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.230 22:53:59 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.230 22:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.230 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:27:15.230 [2024-04-15 22:53:59.860052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.230 22:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.231 22:53:59 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:15.231 22:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.231 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:27:15.231 22:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.231 22:53:59 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:15.231 22:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.231 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:27:15.231 [2024-04-15 22:53:59.883916] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:15.231 [ 
00:27:15.231 { 00:27:15.231 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:15.231 "subtype": "Discovery", 00:27:15.231 "listen_addresses": [ 00:27:15.231 { 00:27:15.231 "transport": "TCP", 00:27:15.231 "trtype": "TCP", 00:27:15.231 "adrfam": "IPv4", 00:27:15.231 "traddr": "10.0.0.2", 00:27:15.231 "trsvcid": "4420" 00:27:15.231 } 00:27:15.231 ], 00:27:15.231 "allow_any_host": true, 00:27:15.231 "hosts": [] 00:27:15.231 }, 00:27:15.231 { 00:27:15.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:15.231 "subtype": "NVMe", 00:27:15.231 "listen_addresses": [ 00:27:15.231 { 00:27:15.231 "transport": "TCP", 00:27:15.231 "trtype": "TCP", 00:27:15.231 "adrfam": "IPv4", 00:27:15.231 "traddr": "10.0.0.2", 00:27:15.231 "trsvcid": "4420" 00:27:15.231 } 00:27:15.231 ], 00:27:15.231 "allow_any_host": true, 00:27:15.231 "hosts": [], 00:27:15.231 "serial_number": "SPDK00000000000001", 00:27:15.231 "model_number": "SPDK bdev Controller", 00:27:15.231 "max_namespaces": 32, 00:27:15.231 "min_cntlid": 1, 00:27:15.231 "max_cntlid": 65519, 00:27:15.231 "namespaces": [ 00:27:15.231 { 00:27:15.231 "nsid": 1, 00:27:15.231 "bdev_name": "Malloc0", 00:27:15.231 "name": "Malloc0", 00:27:15.231 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:15.231 "eui64": "ABCDEF0123456789", 00:27:15.231 "uuid": "987b9555-90d6-43ea-b22c-6c0c13c69cf5" 00:27:15.231 } 00:27:15.231 ] 00:27:15.231 } 00:27:15.231 ] 00:27:15.231 22:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.231 22:53:59 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:15.231 [2024-04-15 22:53:59.920943] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
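The rpc_cmd calls traced earlier in this section (create transport, Malloc bdev, subsystem, namespace, listeners) are equivalent to driving the running target with scripts/rpc.py directly. The sketch below is assembled from the arguments visible in the log; the rpc.py path is assumed to be the standard location in the checked-out SPDK tree, and rpc.py talks to the default /var/tmp/spdk.sock, so no netns wrapper is needed.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed standard rpc.py location

$RPC nvmf_create_transport -t tcp -o -u 8192                          # TCP transport with the test's options
$RPC bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB RAM bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                                              # returns the JSON dumped above

The identify run that follows uses spdk_nvme_identify against the discovery subsystem; from the initiator side the same discovery log could also be fetched with nvme-cli (nvme discover -t tcp -a 10.0.0.2 -s 4420).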
00:27:15.231 [2024-04-15 22:53:59.921008] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257922 ] 00:27:15.231 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.231 [2024-04-15 22:53:59.954213] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:15.231 [2024-04-15 22:53:59.954270] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:15.231 [2024-04-15 22:53:59.954275] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:15.231 [2024-04-15 22:53:59.954286] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:15.231 [2024-04-15 22:53:59.954294] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:15.231 [2024-04-15 22:53:59.957568] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:15.231 [2024-04-15 22:53:59.957599] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d429e0 0 00:27:15.231 [2024-04-15 22:53:59.965548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:15.231 [2024-04-15 22:53:59.965560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:15.231 [2024-04-15 22:53:59.965565] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:15.231 [2024-04-15 22:53:59.965568] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:15.231 [2024-04-15 22:53:59.965604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.231 [2024-04-15 22:53:59.965611] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.231 [2024-04-15 22:53:59.965615] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.231 [2024-04-15 22:53:59.965629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:15.231 [2024-04-15 22:53:59.965647] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.231 [2024-04-15 22:53:59.973553] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.231 [2024-04-15 22:53:59.973562] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.231 [2024-04-15 22:53:59.973566] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.231 [2024-04-15 22:53:59.973575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.231 [2024-04-15 22:53:59.973588] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:15.231 [2024-04-15 22:53:59.973595] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:15.231 [2024-04-15 22:53:59.973601] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:15.231 [2024-04-15 22:53:59.973615] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.231 [2024-04-15 22:53:59.973619] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:15.231 [2024-04-15 22:53:59.973623] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.231 [2024-04-15 22:53:59.973630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.231 [2024-04-15 22:53:59.973643] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.231 [2024-04-15 22:53:59.973869] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.231 [2024-04-15 22:53:59.973875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.231 [2024-04-15 22:53:59.973879] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.231 [2024-04-15 22:53:59.973883] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.231 [2024-04-15 22:53:59.973891] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:15.231 [2024-04-15 22:53:59.973899] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:15.231 [2024-04-15 22:53:59.973905] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.231 [2024-04-15 22:53:59.973909] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.231 [2024-04-15 22:53:59.973913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.231 [2024-04-15 22:53:59.973920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.231 [2024-04-15 22:53:59.973930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.231 [2024-04-15 22:53:59.974156] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.231 [2024-04-15 22:53:59.974162] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.232 [2024-04-15 22:53:59.974166] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974170] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.232 [2024-04-15 22:53:59.974176] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:15.232 [2024-04-15 22:53:59.974185] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:15.232 [2024-04-15 22:53:59.974191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.232 [2024-04-15 22:53:59.974205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.232 [2024-04-15 22:53:59.974215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.232 [2024-04-15 22:53:59.974421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.232 [2024-04-15 
22:53:59.974427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.232 [2024-04-15 22:53:59.974431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.232 [2024-04-15 22:53:59.974442] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:15.232 [2024-04-15 22:53:59.974451] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.232 [2024-04-15 22:53:59.974466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.232 [2024-04-15 22:53:59.974475] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.232 [2024-04-15 22:53:59.974680] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.232 [2024-04-15 22:53:59.974687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.232 [2024-04-15 22:53:59.974691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974694] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.232 [2024-04-15 22:53:59.974700] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:15.232 [2024-04-15 22:53:59.974704] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:15.232 [2024-04-15 22:53:59.974712] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:15.232 [2024-04-15 22:53:59.974818] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:15.232 [2024-04-15 22:53:59.974823] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:15.232 [2024-04-15 22:53:59.974831] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974835] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.974839] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.232 [2024-04-15 22:53:59.974845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.232 [2024-04-15 22:53:59.974856] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.232 [2024-04-15 22:53:59.975049] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.232 [2024-04-15 22:53:59.975056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.232 [2024-04-15 22:53:59.975060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975063] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.232 [2024-04-15 22:53:59.975069] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:15.232 [2024-04-15 22:53:59.975078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.232 [2024-04-15 22:53:59.975092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.232 [2024-04-15 22:53:59.975102] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.232 [2024-04-15 22:53:59.975270] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.232 [2024-04-15 22:53:59.975278] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.232 [2024-04-15 22:53:59.975282] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975286] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.232 [2024-04-15 22:53:59.975291] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:15.232 [2024-04-15 22:53:59.975296] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:15.232 [2024-04-15 22:53:59.975303] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:15.232 [2024-04-15 22:53:59.975311] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:15.232 [2024-04-15 22:53:59.975320] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975323] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975327] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.232 [2024-04-15 22:53:59.975334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.232 [2024-04-15 22:53:59.975344] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.232 [2024-04-15 22:53:59.975545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.232 [2024-04-15 22:53:59.975552] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.232 [2024-04-15 22:53:59.975555] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975559] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d429e0): datao=0, datal=4096, cccid=0 00:27:15.232 [2024-04-15 22:53:59.975564] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daa730) on tqpair(0x1d429e0): 
expected_datao=0, payload_size=4096 00:27:15.232 [2024-04-15 22:53:59.975599] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975604] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.232 [2024-04-15 22:53:59.975777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.232 [2024-04-15 22:53:59.975780] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.232 [2024-04-15 22:53:59.975784] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.232 [2024-04-15 22:53:59.975792] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:15.232 [2024-04-15 22:53:59.975800] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:15.232 [2024-04-15 22:53:59.975805] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:15.233 [2024-04-15 22:53:59.975810] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:15.233 [2024-04-15 22:53:59.975815] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:15.233 [2024-04-15 22:53:59.975819] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:15.233 [2024-04-15 22:53:59.975828] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:15.233 [2024-04-15 22:53:59.975835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.975839] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.975844] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.233 [2024-04-15 22:53:59.975851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:15.233 [2024-04-15 22:53:59.975862] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.233 [2024-04-15 22:53:59.976074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.233 [2024-04-15 22:53:59.976080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.233 [2024-04-15 22:53:59.976084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daa730) on tqpair=0x1d429e0 00:27:15.233 [2024-04-15 22:53:59.976096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d429e0) 00:27:15.233 [2024-04-15 22:53:59.976109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.233 [2024-04-15 22:53:59.976116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d429e0) 00:27:15.233 [2024-04-15 22:53:59.976129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.233 [2024-04-15 22:53:59.976135] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976138] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976142] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d429e0) 00:27:15.233 [2024-04-15 22:53:59.976148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.233 [2024-04-15 22:53:59.976153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976157] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976160] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d429e0) 00:27:15.233 [2024-04-15 22:53:59.976166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.233 [2024-04-15 22:53:59.976171] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:15.233 [2024-04-15 22:53:59.976181] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:15.233 [2024-04-15 22:53:59.976187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976191] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976194] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d429e0) 00:27:15.233 [2024-04-15 22:53:59.976201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.233 [2024-04-15 22:53:59.976212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa730, cid 0, qid 0 00:27:15.233 [2024-04-15 22:53:59.976217] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa890, cid 1, qid 0 00:27:15.233 [2024-04-15 22:53:59.976222] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa9f0, cid 2, qid 0 00:27:15.233 [2024-04-15 22:53:59.976227] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daab50, cid 3, qid 0 00:27:15.233 [2024-04-15 22:53:59.976233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daacb0, cid 4, qid 0 00:27:15.233 [2024-04-15 22:53:59.976492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.233 [2024-04-15 22:53:59.976498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.233 [2024-04-15 22:53:59.976502] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976505] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daacb0) on tqpair=0x1d429e0 00:27:15.233 [2024-04-15 22:53:59.976512] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:15.233 [2024-04-15 22:53:59.976517] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:15.233 [2024-04-15 22:53:59.976527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d429e0) 00:27:15.233 [2024-04-15 22:53:59.976540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.233 [2024-04-15 22:53:59.976555] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daacb0, cid 4, qid 0 00:27:15.233 [2024-04-15 22:53:59.976763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.233 [2024-04-15 22:53:59.976770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.233 [2024-04-15 22:53:59.976773] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976777] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d429e0): datao=0, datal=4096, cccid=4 00:27:15.233 [2024-04-15 22:53:59.976782] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daacb0) on tqpair(0x1d429e0): expected_datao=0, payload_size=4096 00:27:15.233 [2024-04-15 22:53:59.976811] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:53:59.976815] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:54:00.021552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.233 [2024-04-15 22:54:00.021566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.233 [2024-04-15 22:54:00.021569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:54:00.021574] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daacb0) on tqpair=0x1d429e0 00:27:15.233 [2024-04-15 22:54:00.021589] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:15.233 [2024-04-15 22:54:00.021610] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:54:00.021614] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:54:00.021618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d429e0) 00:27:15.233 [2024-04-15 22:54:00.021626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.233 [2024-04-15 22:54:00.021633] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:54:00.021638] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.233 [2024-04-15 22:54:00.021642] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d429e0) 00:27:15.234 [2024-04-15 
22:54:00.021649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.234 [2024-04-15 22:54:00.021667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daacb0, cid 4, qid 0 00:27:15.234 [2024-04-15 22:54:00.021672] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daae10, cid 5, qid 0 00:27:15.234 [2024-04-15 22:54:00.021929] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.234 [2024-04-15 22:54:00.021941] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.234 [2024-04-15 22:54:00.021945] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.234 [2024-04-15 22:54:00.021948] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d429e0): datao=0, datal=1024, cccid=4 00:27:15.234 [2024-04-15 22:54:00.021953] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daacb0) on tqpair(0x1d429e0): expected_datao=0, payload_size=1024 00:27:15.234 [2024-04-15 22:54:00.021960] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.234 [2024-04-15 22:54:00.021964] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.234 [2024-04-15 22:54:00.021970] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.234 [2024-04-15 22:54:00.021976] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.234 [2024-04-15 22:54:00.021979] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.234 [2024-04-15 22:54:00.021983] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daae10) on tqpair=0x1d429e0 00:27:15.500 [2024-04-15 22:54:00.062774] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.500 [2024-04-15 22:54:00.062786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.500 [2024-04-15 22:54:00.062789] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.062793] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daacb0) on tqpair=0x1d429e0 00:27:15.500 [2024-04-15 22:54:00.062805] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.062809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.062813] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d429e0) 00:27:15.500 [2024-04-15 22:54:00.062820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.500 [2024-04-15 22:54:00.062835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daacb0, cid 4, qid 0 00:27:15.500 [2024-04-15 22:54:00.063018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.500 [2024-04-15 22:54:00.063025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.500 [2024-04-15 22:54:00.063028] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.063032] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d429e0): datao=0, datal=3072, cccid=4 00:27:15.500 [2024-04-15 22:54:00.063036] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daacb0) on tqpair(0x1d429e0): expected_datao=0, payload_size=3072 
00:27:15.500 [2024-04-15 22:54:00.063069] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.063074] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.063242] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.500 [2024-04-15 22:54:00.063249] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.500 [2024-04-15 22:54:00.063252] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.063256] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daacb0) on tqpair=0x1d429e0 00:27:15.500 [2024-04-15 22:54:00.063265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.063269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.063272] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d429e0) 00:27:15.500 [2024-04-15 22:54:00.063279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.500 [2024-04-15 22:54:00.063292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daacb0, cid 4, qid 0 00:27:15.500 [2024-04-15 22:54:00.063495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.500 [2024-04-15 22:54:00.063501] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.500 [2024-04-15 22:54:00.063508] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.063512] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d429e0): datao=0, datal=8, cccid=4 00:27:15.500 [2024-04-15 22:54:00.063516] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daacb0) on tqpair(0x1d429e0): expected_datao=0, payload_size=8 00:27:15.500 [2024-04-15 22:54:00.063523] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.063527] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.104744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.500 [2024-04-15 22:54:00.104754] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.500 [2024-04-15 22:54:00.104757] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.500 [2024-04-15 22:54:00.104761] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daacb0) on tqpair=0x1d429e0 00:27:15.500 ===================================================== 00:27:15.500 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:15.500 ===================================================== 00:27:15.500 Controller Capabilities/Features 00:27:15.500 ================================ 00:27:15.500 Vendor ID: 0000 00:27:15.500 Subsystem Vendor ID: 0000 00:27:15.501 Serial Number: .................... 00:27:15.501 Model Number: ........................................ 
00:27:15.501 Firmware Version: 24.01.1 00:27:15.501 Recommended Arb Burst: 0 00:27:15.501 IEEE OUI Identifier: 00 00 00 00:27:15.501 Multi-path I/O 00:27:15.501 May have multiple subsystem ports: No 00:27:15.501 May have multiple controllers: No 00:27:15.501 Associated with SR-IOV VF: No 00:27:15.501 Max Data Transfer Size: 131072 00:27:15.501 Max Number of Namespaces: 0 00:27:15.501 Max Number of I/O Queues: 1024 00:27:15.501 NVMe Specification Version (VS): 1.3 00:27:15.501 NVMe Specification Version (Identify): 1.3 00:27:15.501 Maximum Queue Entries: 128 00:27:15.501 Contiguous Queues Required: Yes 00:27:15.501 Arbitration Mechanisms Supported 00:27:15.501 Weighted Round Robin: Not Supported 00:27:15.501 Vendor Specific: Not Supported 00:27:15.501 Reset Timeout: 15000 ms 00:27:15.501 Doorbell Stride: 4 bytes 00:27:15.501 NVM Subsystem Reset: Not Supported 00:27:15.501 Command Sets Supported 00:27:15.501 NVM Command Set: Supported 00:27:15.501 Boot Partition: Not Supported 00:27:15.501 Memory Page Size Minimum: 4096 bytes 00:27:15.501 Memory Page Size Maximum: 4096 bytes 00:27:15.501 Persistent Memory Region: Not Supported 00:27:15.501 Optional Asynchronous Events Supported 00:27:15.501 Namespace Attribute Notices: Not Supported 00:27:15.501 Firmware Activation Notices: Not Supported 00:27:15.501 ANA Change Notices: Not Supported 00:27:15.501 PLE Aggregate Log Change Notices: Not Supported 00:27:15.501 LBA Status Info Alert Notices: Not Supported 00:27:15.501 EGE Aggregate Log Change Notices: Not Supported 00:27:15.501 Normal NVM Subsystem Shutdown event: Not Supported 00:27:15.501 Zone Descriptor Change Notices: Not Supported 00:27:15.501 Discovery Log Change Notices: Supported 00:27:15.501 Controller Attributes 00:27:15.501 128-bit Host Identifier: Not Supported 00:27:15.501 Non-Operational Permissive Mode: Not Supported 00:27:15.501 NVM Sets: Not Supported 00:27:15.501 Read Recovery Levels: Not Supported 00:27:15.501 Endurance Groups: Not Supported 00:27:15.501 Predictable Latency Mode: Not Supported 00:27:15.501 Traffic Based Keep ALive: Not Supported 00:27:15.501 Namespace Granularity: Not Supported 00:27:15.501 SQ Associations: Not Supported 00:27:15.501 UUID List: Not Supported 00:27:15.501 Multi-Domain Subsystem: Not Supported 00:27:15.501 Fixed Capacity Management: Not Supported 00:27:15.501 Variable Capacity Management: Not Supported 00:27:15.501 Delete Endurance Group: Not Supported 00:27:15.501 Delete NVM Set: Not Supported 00:27:15.501 Extended LBA Formats Supported: Not Supported 00:27:15.501 Flexible Data Placement Supported: Not Supported 00:27:15.501 00:27:15.501 Controller Memory Buffer Support 00:27:15.501 ================================ 00:27:15.501 Supported: No 00:27:15.501 00:27:15.501 Persistent Memory Region Support 00:27:15.501 ================================ 00:27:15.501 Supported: No 00:27:15.501 00:27:15.501 Admin Command Set Attributes 00:27:15.501 ============================ 00:27:15.501 Security Send/Receive: Not Supported 00:27:15.501 Format NVM: Not Supported 00:27:15.501 Firmware Activate/Download: Not Supported 00:27:15.501 Namespace Management: Not Supported 00:27:15.501 Device Self-Test: Not Supported 00:27:15.501 Directives: Not Supported 00:27:15.501 NVMe-MI: Not Supported 00:27:15.501 Virtualization Management: Not Supported 00:27:15.501 Doorbell Buffer Config: Not Supported 00:27:15.501 Get LBA Status Capability: Not Supported 00:27:15.501 Command & Feature Lockdown Capability: Not Supported 00:27:15.501 Abort Command Limit: 1 00:27:15.501 
Async Event Request Limit: 4 00:27:15.501 Number of Firmware Slots: N/A 00:27:15.501 Firmware Slot 1 Read-Only: N/A 00:27:15.501 Firmware Activation Without Reset: N/A 00:27:15.501 Multiple Update Detection Support: N/A 00:27:15.501 Firmware Update Granularity: No Information Provided 00:27:15.501 Per-Namespace SMART Log: No 00:27:15.501 Asymmetric Namespace Access Log Page: Not Supported 00:27:15.501 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:15.501 Command Effects Log Page: Not Supported 00:27:15.501 Get Log Page Extended Data: Supported 00:27:15.501 Telemetry Log Pages: Not Supported 00:27:15.501 Persistent Event Log Pages: Not Supported 00:27:15.501 Supported Log Pages Log Page: May Support 00:27:15.501 Commands Supported & Effects Log Page: Not Supported 00:27:15.501 Feature Identifiers & Effects Log Page:May Support 00:27:15.501 NVMe-MI Commands & Effects Log Page: May Support 00:27:15.501 Data Area 4 for Telemetry Log: Not Supported 00:27:15.501 Error Log Page Entries Supported: 128 00:27:15.501 Keep Alive: Not Supported 00:27:15.501 00:27:15.501 NVM Command Set Attributes 00:27:15.501 ========================== 00:27:15.501 Submission Queue Entry Size 00:27:15.501 Max: 1 00:27:15.501 Min: 1 00:27:15.501 Completion Queue Entry Size 00:27:15.501 Max: 1 00:27:15.501 Min: 1 00:27:15.501 Number of Namespaces: 0 00:27:15.501 Compare Command: Not Supported 00:27:15.501 Write Uncorrectable Command: Not Supported 00:27:15.501 Dataset Management Command: Not Supported 00:27:15.501 Write Zeroes Command: Not Supported 00:27:15.501 Set Features Save Field: Not Supported 00:27:15.501 Reservations: Not Supported 00:27:15.501 Timestamp: Not Supported 00:27:15.501 Copy: Not Supported 00:27:15.501 Volatile Write Cache: Not Present 00:27:15.501 Atomic Write Unit (Normal): 1 00:27:15.501 Atomic Write Unit (PFail): 1 00:27:15.501 Atomic Compare & Write Unit: 1 00:27:15.501 Fused Compare & Write: Supported 00:27:15.501 Scatter-Gather List 00:27:15.501 SGL Command Set: Supported 00:27:15.501 SGL Keyed: Supported 00:27:15.501 SGL Bit Bucket Descriptor: Not Supported 00:27:15.501 SGL Metadata Pointer: Not Supported 00:27:15.501 Oversized SGL: Not Supported 00:27:15.501 SGL Metadata Address: Not Supported 00:27:15.501 SGL Offset: Supported 00:27:15.501 Transport SGL Data Block: Not Supported 00:27:15.501 Replay Protected Memory Block: Not Supported 00:27:15.501 00:27:15.501 Firmware Slot Information 00:27:15.501 ========================= 00:27:15.501 Active slot: 0 00:27:15.501 00:27:15.501 00:27:15.501 Error Log 00:27:15.501 ========= 00:27:15.501 00:27:15.501 Active Namespaces 00:27:15.501 ================= 00:27:15.501 Discovery Log Page 00:27:15.501 ================== 00:27:15.501 Generation Counter: 2 00:27:15.501 Number of Records: 2 00:27:15.501 Record Format: 0 00:27:15.501 00:27:15.501 Discovery Log Entry 0 00:27:15.501 ---------------------- 00:27:15.501 Transport Type: 3 (TCP) 00:27:15.501 Address Family: 1 (IPv4) 00:27:15.501 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:15.501 Entry Flags: 00:27:15.501 Duplicate Returned Information: 1 00:27:15.501 Explicit Persistent Connection Support for Discovery: 1 00:27:15.501 Transport Requirements: 00:27:15.501 Secure Channel: Not Required 00:27:15.501 Port ID: 0 (0x0000) 00:27:15.501 Controller ID: 65535 (0xffff) 00:27:15.501 Admin Max SQ Size: 128 00:27:15.501 Transport Service Identifier: 4420 00:27:15.501 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:15.501 Transport Address: 10.0.0.2 00:27:15.501 
Discovery Log Entry 1 00:27:15.501 ---------------------- 00:27:15.501 Transport Type: 3 (TCP) 00:27:15.501 Address Family: 1 (IPv4) 00:27:15.501 Subsystem Type: 2 (NVM Subsystem) 00:27:15.501 Entry Flags: 00:27:15.501 Duplicate Returned Information: 0 00:27:15.501 Explicit Persistent Connection Support for Discovery: 0 00:27:15.501 Transport Requirements: 00:27:15.501 Secure Channel: Not Required 00:27:15.501 Port ID: 0 (0x0000) 00:27:15.501 Controller ID: 65535 (0xffff) 00:27:15.501 Admin Max SQ Size: 128 00:27:15.501 Transport Service Identifier: 4420 00:27:15.501 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:15.501 Transport Address: 10.0.0.2 [2024-04-15 22:54:00.104851] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:15.501 [2024-04-15 22:54:00.104864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.501 [2024-04-15 22:54:00.104871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.501 [2024-04-15 22:54:00.104877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.501 [2024-04-15 22:54:00.104883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.501 [2024-04-15 22:54:00.104894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.104898] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.104902] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d429e0) 00:27:15.502 [2024-04-15 22:54:00.104909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.502 [2024-04-15 22:54:00.104924] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daab50, cid 3, qid 0 00:27:15.502 [2024-04-15 22:54:00.105042] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.502 [2024-04-15 22:54:00.105048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.502 [2024-04-15 22:54:00.105052] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.105055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daab50) on tqpair=0x1d429e0 00:27:15.502 [2024-04-15 22:54:00.105064] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.105067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.105071] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d429e0) 00:27:15.502 [2024-04-15 22:54:00.105077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.502 [2024-04-15 22:54:00.105090] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daab50, cid 3, qid 0 00:27:15.502 [2024-04-15 22:54:00.105310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.502 [2024-04-15 22:54:00.105316] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.502 [2024-04-15 22:54:00.105319] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.105323] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daab50) on tqpair=0x1d429e0 00:27:15.502 [2024-04-15 22:54:00.105329] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:15.502 [2024-04-15 22:54:00.105333] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:15.502 [2024-04-15 22:54:00.105344] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.105348] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.105352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d429e0) 00:27:15.502 [2024-04-15 22:54:00.105358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.502 [2024-04-15 22:54:00.105368] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daab50, cid 3, qid 0 00:27:15.502 [2024-04-15 22:54:00.109549] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.502 [2024-04-15 22:54:00.109557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.502 [2024-04-15 22:54:00.109561] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.109565] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daab50) on tqpair=0x1d429e0 00:27:15.502 [2024-04-15 22:54:00.109576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.109580] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.109584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d429e0) 00:27:15.502 [2024-04-15 22:54:00.109591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.502 [2024-04-15 22:54:00.109602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daab50, cid 3, qid 0 00:27:15.502 [2024-04-15 22:54:00.109790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.502 [2024-04-15 22:54:00.109797] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.502 [2024-04-15 22:54:00.109801] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.109804] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1daab50) on tqpair=0x1d429e0 00:27:15.502 [2024-04-15 22:54:00.109812] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:27:15.502 00:27:15.502 22:54:00 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:15.502 [2024-04-15 22:54:00.147229] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
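The host/identify.sh step above runs spdk_nvme_identify with the transport ID string trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, and the debug trace that follows is the host library parsing that ID, connecting the admin queue over TCP, and walking the controller-init state machine (FABRIC CONNECT, PROPERTY GET of VS/CAP, CC.EN = 1, then polling CSTS.RDY). As a rough illustration of the same flow against SPDK's public host API (a minimal sketch, not the test's actual code; the file name, error handling, and printed fields are illustrative only):

    /* identify_sketch.c - illustrative sketch; assumes the SPDK headers/libraries built above.
     * Mirrors the flow traced in the log: parse transport ID -> connect adminq -> identify controller. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";               /* hypothetical app name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string the test passes to spdk_nvme_identify -r. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Connects the admin queue and runs the init state machine traced in the debug lines. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Cached IDENTIFY CONTROLLER data - the same fields the report above prints. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s MN: %.40s FR: %.8s\n",
               (const char *)cdata->sn, (const char *)cdata->mn, (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }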
00:27:15.502 [2024-04-15 22:54:00.147272] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257944 ] 00:27:15.502 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.502 [2024-04-15 22:54:00.179156] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:15.502 [2024-04-15 22:54:00.179199] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:15.502 [2024-04-15 22:54:00.179204] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:15.502 [2024-04-15 22:54:00.179215] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:15.502 [2024-04-15 22:54:00.179222] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:15.502 [2024-04-15 22:54:00.182575] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:15.502 [2024-04-15 22:54:00.182601] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x208e9e0 0 00:27:15.502 [2024-04-15 22:54:00.190548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:15.502 [2024-04-15 22:54:00.190559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:15.502 [2024-04-15 22:54:00.190567] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:15.502 [2024-04-15 22:54:00.190571] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:15.502 [2024-04-15 22:54:00.190603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.190608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.190613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.502 [2024-04-15 22:54:00.190624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:15.502 [2024-04-15 22:54:00.190639] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.502 [2024-04-15 22:54:00.198554] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.502 [2024-04-15 22:54:00.198563] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.502 [2024-04-15 22:54:00.198566] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.198571] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on tqpair=0x208e9e0 00:27:15.502 [2024-04-15 22:54:00.198583] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:15.502 [2024-04-15 22:54:00.198589] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:15.502 [2024-04-15 22:54:00.198594] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:15.502 [2024-04-15 22:54:00.198607] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.198611] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 
22:54:00.198614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.502 [2024-04-15 22:54:00.198622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.502 [2024-04-15 22:54:00.198635] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.502 [2024-04-15 22:54:00.198838] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.502 [2024-04-15 22:54:00.198844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.502 [2024-04-15 22:54:00.198848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.198852] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on tqpair=0x208e9e0 00:27:15.502 [2024-04-15 22:54:00.198860] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:15.502 [2024-04-15 22:54:00.198867] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:15.502 [2024-04-15 22:54:00.198874] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.198878] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.198881] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.502 [2024-04-15 22:54:00.198888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.502 [2024-04-15 22:54:00.198898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.502 [2024-04-15 22:54:00.199106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.502 [2024-04-15 22:54:00.199112] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.502 [2024-04-15 22:54:00.199116] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.199120] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on tqpair=0x208e9e0 00:27:15.502 [2024-04-15 22:54:00.199125] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:15.502 [2024-04-15 22:54:00.199136] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:15.502 [2024-04-15 22:54:00.199143] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.199146] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.199150] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.502 [2024-04-15 22:54:00.199157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.502 [2024-04-15 22:54:00.199167] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.502 [2024-04-15 22:54:00.199380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.502 [2024-04-15 22:54:00.199386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
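The FABRIC PROPERTY GET/SET commands above (read vs, read cap, check en, CC.EN = 1, then wait for CSTS.RDY = 1) are the fabrics equivalents of the memory-mapped controller registers. Once a controller is connected, those register values can also be read back through the public API; a small sketch, assuming a ctrlr obtained as in the previous sketch (the function name and printed layout are illustrative only):

    /* regs_sketch.c - illustrative sketch; assumes 'ctrlr' was connected as in the previous sketch. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        /* VS 1.3 and MQES 127 (0-based, i.e. "Maximum Queue Entries: 128") match the report above. */
        printf("VS %u.%u  MQES %u  TO %u*500ms  CSTS.RDY %u\n",
               vs.bits.mjr, vs.bits.mnr, cap.bits.mqes, cap.bits.to, csts.bits.rdy);
    }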
00:27:15.502 [2024-04-15 22:54:00.199389] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.199393] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on tqpair=0x208e9e0 00:27:15.502 [2024-04-15 22:54:00.199399] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:15.502 [2024-04-15 22:54:00.199409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.199412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.502 [2024-04-15 22:54:00.199416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.199423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.503 [2024-04-15 22:54:00.199432] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.503 [2024-04-15 22:54:00.199599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.503 [2024-04-15 22:54:00.199615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.503 [2024-04-15 22:54:00.199618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.199622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on tqpair=0x208e9e0 00:27:15.503 [2024-04-15 22:54:00.199628] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:15.503 [2024-04-15 22:54:00.199632] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:15.503 [2024-04-15 22:54:00.199640] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:15.503 [2024-04-15 22:54:00.199745] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:15.503 [2024-04-15 22:54:00.199749] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:15.503 [2024-04-15 22:54:00.199758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.199762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.199765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.199772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.503 [2024-04-15 22:54:00.199782] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.503 [2024-04-15 22:54:00.199945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.503 [2024-04-15 22:54:00.199951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.503 [2024-04-15 22:54:00.199955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.199958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on 
tqpair=0x208e9e0 00:27:15.503 [2024-04-15 22:54:00.199966] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:15.503 [2024-04-15 22:54:00.199975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.199979] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.199983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.199989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.503 [2024-04-15 22:54:00.199999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.503 [2024-04-15 22:54:00.200161] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.503 [2024-04-15 22:54:00.200167] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.503 [2024-04-15 22:54:00.200171] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.200174] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on tqpair=0x208e9e0 00:27:15.503 [2024-04-15 22:54:00.200180] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:15.503 [2024-04-15 22:54:00.200185] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:15.503 [2024-04-15 22:54:00.200192] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:15.503 [2024-04-15 22:54:00.200200] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:15.503 [2024-04-15 22:54:00.200208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.200212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.200216] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.200222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.503 [2024-04-15 22:54:00.200232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.503 [2024-04-15 22:54:00.200432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.503 [2024-04-15 22:54:00.200439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.503 [2024-04-15 22:54:00.200442] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.200446] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x208e9e0): datao=0, datal=4096, cccid=0 00:27:15.503 [2024-04-15 22:54:00.200451] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f6730) on tqpair(0x208e9e0): expected_datao=0, payload_size=4096 00:27:15.503 [2024-04-15 22:54:00.200481] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.200486] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241695] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.503 [2024-04-15 22:54:00.241705] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.503 [2024-04-15 22:54:00.241709] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241712] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on tqpair=0x208e9e0 00:27:15.503 [2024-04-15 22:54:00.241722] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:15.503 [2024-04-15 22:54:00.241730] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:15.503 [2024-04-15 22:54:00.241734] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:15.503 [2024-04-15 22:54:00.241741] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:15.503 [2024-04-15 22:54:00.241745] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:15.503 [2024-04-15 22:54:00.241750] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:15.503 [2024-04-15 22:54:00.241758] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:15.503 [2024-04-15 22:54:00.241765] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.241780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:15.503 [2024-04-15 22:54:00.241792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.503 [2024-04-15 22:54:00.241923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.503 [2024-04-15 22:54:00.241929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.503 [2024-04-15 22:54:00.241933] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241936] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6730) on tqpair=0x208e9e0 00:27:15.503 [2024-04-15 22:54:00.241944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241951] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.241957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.503 [2024-04-15 22:54:00.241963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241966] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241970] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.241976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.503 [2024-04-15 22:54:00.241981] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241985] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.241988] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.241994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.503 [2024-04-15 22:54:00.242000] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.242003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.242007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.242012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.503 [2024-04-15 22:54:00.242017] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:15.503 [2024-04-15 22:54:00.242027] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:15.503 [2024-04-15 22:54:00.242033] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.242037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.242042] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x208e9e0) 00:27:15.503 [2024-04-15 22:54:00.242049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.503 [2024-04-15 22:54:00.242060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6730, cid 0, qid 0 00:27:15.503 [2024-04-15 22:54:00.242065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6890, cid 1, qid 0 00:27:15.503 [2024-04-15 22:54:00.242070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f69f0, cid 2, qid 0 00:27:15.503 [2024-04-15 22:54:00.242074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.503 [2024-04-15 22:54:00.242079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6cb0, cid 4, qid 0 00:27:15.503 [2024-04-15 22:54:00.242292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.503 [2024-04-15 22:54:00.242298] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.503 [2024-04-15 22:54:00.242302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.503 [2024-04-15 22:54:00.242305] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6cb0) on tqpair=0x208e9e0 00:27:15.503 [2024-04-15 22:54:00.242311] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:15.503 
[2024-04-15 22:54:00.242316] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.242324] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.242330] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.242336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.242340] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.242343] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x208e9e0) 00:27:15.504 [2024-04-15 22:54:00.242350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:15.504 [2024-04-15 22:54:00.242359] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6cb0, cid 4, qid 0 00:27:15.504 [2024-04-15 22:54:00.242527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.504 [2024-04-15 22:54:00.242533] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.504 [2024-04-15 22:54:00.242537] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.242540] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6cb0) on tqpair=0x208e9e0 00:27:15.504 [2024-04-15 22:54:00.246598] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.246608] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.246616] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.246620] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.246623] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x208e9e0) 00:27:15.504 [2024-04-15 22:54:00.246630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.504 [2024-04-15 22:54:00.246641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6cb0, cid 4, qid 0 00:27:15.504 [2024-04-15 22:54:00.246841] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.504 [2024-04-15 22:54:00.246848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.504 [2024-04-15 22:54:00.246854] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.246858] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x208e9e0): datao=0, datal=4096, cccid=4 00:27:15.504 [2024-04-15 22:54:00.246862] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f6cb0) on tqpair(0x208e9e0): expected_datao=0, payload_size=4096 00:27:15.504 [2024-04-15 22:54:00.246870] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.246873] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.504 [2024-04-15 22:54:00.247059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.504 [2024-04-15 22:54:00.247062] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247066] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6cb0) on tqpair=0x208e9e0 00:27:15.504 [2024-04-15 22:54:00.247076] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:15.504 [2024-04-15 22:54:00.247092] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.247101] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.247107] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247111] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x208e9e0) 00:27:15.504 [2024-04-15 22:54:00.247121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.504 [2024-04-15 22:54:00.247131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6cb0, cid 4, qid 0 00:27:15.504 [2024-04-15 22:54:00.247320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.504 [2024-04-15 22:54:00.247326] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.504 [2024-04-15 22:54:00.247330] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247333] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x208e9e0): datao=0, datal=4096, cccid=4 00:27:15.504 [2024-04-15 22:54:00.247338] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f6cb0) on tqpair(0x208e9e0): expected_datao=0, payload_size=4096 00:27:15.504 [2024-04-15 22:54:00.247345] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247349] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247513] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.504 [2024-04-15 22:54:00.247519] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.504 [2024-04-15 22:54:00.247522] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247526] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6cb0) on tqpair=0x208e9e0 00:27:15.504 [2024-04-15 22:54:00.247539] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.247552] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.247558] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.504 [2024-04-15 
22:54:00.247562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x208e9e0) 00:27:15.504 [2024-04-15 22:54:00.247572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.504 [2024-04-15 22:54:00.247585] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6cb0, cid 4, qid 0 00:27:15.504 [2024-04-15 22:54:00.247806] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.504 [2024-04-15 22:54:00.247812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.504 [2024-04-15 22:54:00.247816] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247819] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x208e9e0): datao=0, datal=4096, cccid=4 00:27:15.504 [2024-04-15 22:54:00.247823] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f6cb0) on tqpair(0x208e9e0): expected_datao=0, payload_size=4096 00:27:15.504 [2024-04-15 22:54:00.247830] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.247834] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.504 [2024-04-15 22:54:00.248060] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.504 [2024-04-15 22:54:00.248063] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248067] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6cb0) on tqpair=0x208e9e0 00:27:15.504 [2024-04-15 22:54:00.248075] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.248083] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.248091] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.248097] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.248102] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.248107] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:15.504 [2024-04-15 22:54:00.248112] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:15.504 [2024-04-15 22:54:00.248117] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:15.504 [2024-04-15 22:54:00.248130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248137] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x208e9e0) 00:27:15.504 [2024-04-15 22:54:00.248143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.504 [2024-04-15 22:54:00.248150] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248153] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248157] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x208e9e0) 00:27:15.504 [2024-04-15 22:54:00.248163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.504 [2024-04-15 22:54:00.248176] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6cb0, cid 4, qid 0 00:27:15.504 [2024-04-15 22:54:00.248181] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6e10, cid 5, qid 0 00:27:15.504 [2024-04-15 22:54:00.248379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.504 [2024-04-15 22:54:00.248386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.504 [2024-04-15 22:54:00.248391] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248395] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6cb0) on tqpair=0x208e9e0 00:27:15.504 [2024-04-15 22:54:00.248402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.504 [2024-04-15 22:54:00.248408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.504 [2024-04-15 22:54:00.248411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248415] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6e10) on tqpair=0x208e9e0 00:27:15.504 [2024-04-15 22:54:00.248424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248428] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248431] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x208e9e0) 00:27:15.504 [2024-04-15 22:54:00.248438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.504 [2024-04-15 22:54:00.248447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6e10, cid 5, qid 0 00:27:15.504 [2024-04-15 22:54:00.248666] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.504 [2024-04-15 22:54:00.248673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.504 [2024-04-15 22:54:00.248676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.504 [2024-04-15 22:54:00.248680] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6e10) on tqpair=0x208e9e0 00:27:15.505 [2024-04-15 22:54:00.248689] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.248693] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.248696] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x208e9e0) 00:27:15.505 [2024-04-15 22:54:00.248702] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.505 [2024-04-15 22:54:00.248712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6e10, cid 5, qid 0 00:27:15.505 [2024-04-15 22:54:00.248922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.505 [2024-04-15 22:54:00.248929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.505 [2024-04-15 22:54:00.248932] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.248936] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6e10) on tqpair=0x208e9e0 00:27:15.505 [2024-04-15 22:54:00.248946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.248949] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.248953] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x208e9e0) 00:27:15.505 [2024-04-15 22:54:00.248959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.505 [2024-04-15 22:54:00.248968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6e10, cid 5, qid 0 00:27:15.505 [2024-04-15 22:54:00.249192] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.505 [2024-04-15 22:54:00.249198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.505 [2024-04-15 22:54:00.249202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249205] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6e10) on tqpair=0x208e9e0 00:27:15.505 [2024-04-15 22:54:00.249217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x208e9e0) 00:27:15.505 [2024-04-15 22:54:00.249230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.505 [2024-04-15 22:54:00.249240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x208e9e0) 00:27:15.505 [2024-04-15 22:54:00.249253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.505 [2024-04-15 22:54:00.249260] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x208e9e0) 00:27:15.505 [2024-04-15 22:54:00.249273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:15.505 [2024-04-15 22:54:00.249280] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x208e9e0) 00:27:15.505 [2024-04-15 22:54:00.249293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.505 [2024-04-15 22:54:00.249304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6e10, cid 5, qid 0 00:27:15.505 [2024-04-15 22:54:00.249309] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6cb0, cid 4, qid 0 00:27:15.505 [2024-04-15 22:54:00.249314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6f70, cid 6, qid 0 00:27:15.505 [2024-04-15 22:54:00.249318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f70d0, cid 7, qid 0 00:27:15.505 [2024-04-15 22:54:00.249654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.505 [2024-04-15 22:54:00.249660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.505 [2024-04-15 22:54:00.249663] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249667] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x208e9e0): datao=0, datal=8192, cccid=5 00:27:15.505 [2024-04-15 22:54:00.249671] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f6e10) on tqpair(0x208e9e0): expected_datao=0, payload_size=8192 00:27:15.505 [2024-04-15 22:54:00.249718] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249722] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.505 [2024-04-15 22:54:00.249734] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.505 [2024-04-15 22:54:00.249737] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249740] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x208e9e0): datao=0, datal=512, cccid=4 00:27:15.505 [2024-04-15 22:54:00.249745] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f6cb0) on tqpair(0x208e9e0): expected_datao=0, payload_size=512 00:27:15.505 [2024-04-15 22:54:00.249752] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249755] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249761] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.505 [2024-04-15 22:54:00.249766] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.505 [2024-04-15 22:54:00.249770] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249773] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x208e9e0): datao=0, datal=512, cccid=6 00:27:15.505 [2024-04-15 22:54:00.249779] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f6f70) on tqpair(0x208e9e0): expected_datao=0, payload_size=512 00:27:15.505 [2024-04-15 22:54:00.249786] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249789] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:15.505 [2024-04-15 22:54:00.249801] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:15.505 [2024-04-15 22:54:00.249804] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249807] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x208e9e0): datao=0, datal=4096, cccid=7 00:27:15.505 [2024-04-15 22:54:00.249811] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f70d0) on tqpair(0x208e9e0): expected_datao=0, payload_size=4096 00:27:15.505 [2024-04-15 22:54:00.249823] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249826] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249852] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.505 [2024-04-15 22:54:00.249858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.505 [2024-04-15 22:54:00.249861] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6e10) on tqpair=0x208e9e0 00:27:15.505 [2024-04-15 22:54:00.249879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.505 [2024-04-15 22:54:00.249885] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.505 [2024-04-15 22:54:00.249888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6cb0) on tqpair=0x208e9e0 00:27:15.505 [2024-04-15 22:54:00.249901] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.505 [2024-04-15 22:54:00.249907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.505 [2024-04-15 22:54:00.249910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249914] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6f70) on tqpair=0x208e9e0 00:27:15.505 [2024-04-15 22:54:00.249921] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.505 [2024-04-15 22:54:00.249927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.505 [2024-04-15 22:54:00.249930] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.505 [2024-04-15 22:54:00.249934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f70d0) on tqpair=0x208e9e0 00:27:15.505 ===================================================== 00:27:15.505 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:15.505 ===================================================== 00:27:15.505 Controller Capabilities/Features 00:27:15.506 ================================ 00:27:15.506 Vendor ID: 8086 00:27:15.506 Subsystem Vendor ID: 8086 00:27:15.506 Serial Number: SPDK00000000000001 00:27:15.506 Model Number: SPDK bdev Controller 00:27:15.506 Firmware Version: 24.01.1 00:27:15.506 Recommended Arb Burst: 6 00:27:15.506 IEEE OUI Identifier: e4 d2 5c 00:27:15.506 Multi-path I/O 00:27:15.506 May have multiple subsystem 
ports: Yes 00:27:15.506 May have multiple controllers: Yes 00:27:15.506 Associated with SR-IOV VF: No 00:27:15.506 Max Data Transfer Size: 131072 00:27:15.506 Max Number of Namespaces: 32 00:27:15.506 Max Number of I/O Queues: 127 00:27:15.506 NVMe Specification Version (VS): 1.3 00:27:15.506 NVMe Specification Version (Identify): 1.3 00:27:15.506 Maximum Queue Entries: 128 00:27:15.506 Contiguous Queues Required: Yes 00:27:15.506 Arbitration Mechanisms Supported 00:27:15.506 Weighted Round Robin: Not Supported 00:27:15.506 Vendor Specific: Not Supported 00:27:15.506 Reset Timeout: 15000 ms 00:27:15.506 Doorbell Stride: 4 bytes 00:27:15.506 NVM Subsystem Reset: Not Supported 00:27:15.506 Command Sets Supported 00:27:15.506 NVM Command Set: Supported 00:27:15.506 Boot Partition: Not Supported 00:27:15.506 Memory Page Size Minimum: 4096 bytes 00:27:15.506 Memory Page Size Maximum: 4096 bytes 00:27:15.506 Persistent Memory Region: Not Supported 00:27:15.506 Optional Asynchronous Events Supported 00:27:15.506 Namespace Attribute Notices: Supported 00:27:15.506 Firmware Activation Notices: Not Supported 00:27:15.506 ANA Change Notices: Not Supported 00:27:15.506 PLE Aggregate Log Change Notices: Not Supported 00:27:15.506 LBA Status Info Alert Notices: Not Supported 00:27:15.506 EGE Aggregate Log Change Notices: Not Supported 00:27:15.506 Normal NVM Subsystem Shutdown event: Not Supported 00:27:15.506 Zone Descriptor Change Notices: Not Supported 00:27:15.506 Discovery Log Change Notices: Not Supported 00:27:15.506 Controller Attributes 00:27:15.506 128-bit Host Identifier: Supported 00:27:15.506 Non-Operational Permissive Mode: Not Supported 00:27:15.506 NVM Sets: Not Supported 00:27:15.506 Read Recovery Levels: Not Supported 00:27:15.506 Endurance Groups: Not Supported 00:27:15.506 Predictable Latency Mode: Not Supported 00:27:15.506 Traffic Based Keep ALive: Not Supported 00:27:15.506 Namespace Granularity: Not Supported 00:27:15.506 SQ Associations: Not Supported 00:27:15.506 UUID List: Not Supported 00:27:15.506 Multi-Domain Subsystem: Not Supported 00:27:15.506 Fixed Capacity Management: Not Supported 00:27:15.506 Variable Capacity Management: Not Supported 00:27:15.506 Delete Endurance Group: Not Supported 00:27:15.506 Delete NVM Set: Not Supported 00:27:15.506 Extended LBA Formats Supported: Not Supported 00:27:15.506 Flexible Data Placement Supported: Not Supported 00:27:15.506 00:27:15.506 Controller Memory Buffer Support 00:27:15.506 ================================ 00:27:15.506 Supported: No 00:27:15.506 00:27:15.506 Persistent Memory Region Support 00:27:15.506 ================================ 00:27:15.506 Supported: No 00:27:15.506 00:27:15.506 Admin Command Set Attributes 00:27:15.506 ============================ 00:27:15.506 Security Send/Receive: Not Supported 00:27:15.506 Format NVM: Not Supported 00:27:15.506 Firmware Activate/Download: Not Supported 00:27:15.506 Namespace Management: Not Supported 00:27:15.506 Device Self-Test: Not Supported 00:27:15.506 Directives: Not Supported 00:27:15.506 NVMe-MI: Not Supported 00:27:15.506 Virtualization Management: Not Supported 00:27:15.506 Doorbell Buffer Config: Not Supported 00:27:15.506 Get LBA Status Capability: Not Supported 00:27:15.506 Command & Feature Lockdown Capability: Not Supported 00:27:15.506 Abort Command Limit: 4 00:27:15.506 Async Event Request Limit: 4 00:27:15.506 Number of Firmware Slots: N/A 00:27:15.506 Firmware Slot 1 Read-Only: N/A 00:27:15.506 Firmware Activation Without Reset: N/A 00:27:15.506 Multiple 
Update Detection Support: N/A 00:27:15.506 Firmware Update Granularity: No Information Provided 00:27:15.506 Per-Namespace SMART Log: No 00:27:15.506 Asymmetric Namespace Access Log Page: Not Supported 00:27:15.506 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:15.506 Command Effects Log Page: Supported 00:27:15.506 Get Log Page Extended Data: Supported 00:27:15.506 Telemetry Log Pages: Not Supported 00:27:15.506 Persistent Event Log Pages: Not Supported 00:27:15.506 Supported Log Pages Log Page: May Support 00:27:15.506 Commands Supported & Effects Log Page: Not Supported 00:27:15.506 Feature Identifiers & Effects Log Page:May Support 00:27:15.506 NVMe-MI Commands & Effects Log Page: May Support 00:27:15.506 Data Area 4 for Telemetry Log: Not Supported 00:27:15.506 Error Log Page Entries Supported: 128 00:27:15.506 Keep Alive: Supported 00:27:15.506 Keep Alive Granularity: 10000 ms 00:27:15.506 00:27:15.506 NVM Command Set Attributes 00:27:15.506 ========================== 00:27:15.506 Submission Queue Entry Size 00:27:15.506 Max: 64 00:27:15.506 Min: 64 00:27:15.506 Completion Queue Entry Size 00:27:15.506 Max: 16 00:27:15.506 Min: 16 00:27:15.506 Number of Namespaces: 32 00:27:15.506 Compare Command: Supported 00:27:15.506 Write Uncorrectable Command: Not Supported 00:27:15.506 Dataset Management Command: Supported 00:27:15.506 Write Zeroes Command: Supported 00:27:15.506 Set Features Save Field: Not Supported 00:27:15.506 Reservations: Supported 00:27:15.506 Timestamp: Not Supported 00:27:15.506 Copy: Supported 00:27:15.506 Volatile Write Cache: Present 00:27:15.506 Atomic Write Unit (Normal): 1 00:27:15.506 Atomic Write Unit (PFail): 1 00:27:15.506 Atomic Compare & Write Unit: 1 00:27:15.506 Fused Compare & Write: Supported 00:27:15.506 Scatter-Gather List 00:27:15.506 SGL Command Set: Supported 00:27:15.506 SGL Keyed: Supported 00:27:15.506 SGL Bit Bucket Descriptor: Not Supported 00:27:15.506 SGL Metadata Pointer: Not Supported 00:27:15.506 Oversized SGL: Not Supported 00:27:15.506 SGL Metadata Address: Not Supported 00:27:15.506 SGL Offset: Supported 00:27:15.506 Transport SGL Data Block: Not Supported 00:27:15.506 Replay Protected Memory Block: Not Supported 00:27:15.506 00:27:15.506 Firmware Slot Information 00:27:15.506 ========================= 00:27:15.506 Active slot: 1 00:27:15.506 Slot 1 Firmware Revision: 24.01.1 00:27:15.506 00:27:15.506 00:27:15.506 Commands Supported and Effects 00:27:15.506 ============================== 00:27:15.506 Admin Commands 00:27:15.506 -------------- 00:27:15.506 Get Log Page (02h): Supported 00:27:15.506 Identify (06h): Supported 00:27:15.506 Abort (08h): Supported 00:27:15.506 Set Features (09h): Supported 00:27:15.506 Get Features (0Ah): Supported 00:27:15.506 Asynchronous Event Request (0Ch): Supported 00:27:15.506 Keep Alive (18h): Supported 00:27:15.506 I/O Commands 00:27:15.506 ------------ 00:27:15.506 Flush (00h): Supported LBA-Change 00:27:15.506 Write (01h): Supported LBA-Change 00:27:15.506 Read (02h): Supported 00:27:15.506 Compare (05h): Supported 00:27:15.506 Write Zeroes (08h): Supported LBA-Change 00:27:15.506 Dataset Management (09h): Supported LBA-Change 00:27:15.506 Copy (19h): Supported LBA-Change 00:27:15.506 Unknown (79h): Supported LBA-Change 00:27:15.506 Unknown (7Ah): Supported 00:27:15.506 00:27:15.506 Error Log 00:27:15.506 ========= 00:27:15.506 00:27:15.506 Arbitration 00:27:15.506 =========== 00:27:15.506 Arbitration Burst: 1 00:27:15.506 00:27:15.506 Power Management 00:27:15.506 ================ 00:27:15.506 
Number of Power States: 1 00:27:15.506 Current Power State: Power State #0 00:27:15.506 Power State #0: 00:27:15.506 Max Power: 0.00 W 00:27:15.506 Non-Operational State: Operational 00:27:15.506 Entry Latency: Not Reported 00:27:15.506 Exit Latency: Not Reported 00:27:15.506 Relative Read Throughput: 0 00:27:15.506 Relative Read Latency: 0 00:27:15.506 Relative Write Throughput: 0 00:27:15.506 Relative Write Latency: 0 00:27:15.506 Idle Power: Not Reported 00:27:15.506 Active Power: Not Reported 00:27:15.506 Non-Operational Permissive Mode: Not Supported 00:27:15.506 00:27:15.506 Health Information 00:27:15.506 ================== 00:27:15.506 Critical Warnings: 00:27:15.506 Available Spare Space: OK 00:27:15.506 Temperature: OK 00:27:15.506 Device Reliability: OK 00:27:15.506 Read Only: No 00:27:15.506 Volatile Memory Backup: OK 00:27:15.506 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:15.506 Temperature Threshold: [2024-04-15 22:54:00.250040] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.506 [2024-04-15 22:54:00.250046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.506 [2024-04-15 22:54:00.250049] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x208e9e0) 00:27:15.506 [2024-04-15 22:54:00.250056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.250067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f70d0, cid 7, qid 0 00:27:15.507 [2024-04-15 22:54:00.250277] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.250283] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.250286] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.250290] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f70d0) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.250321] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:15.507 [2024-04-15 22:54:00.250333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.507 [2024-04-15 22:54:00.250339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.507 [2024-04-15 22:54:00.250347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.507 [2024-04-15 22:54:00.250353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.507 [2024-04-15 22:54:00.250361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.250365] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.250369] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.250376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.250387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 
00:27:15.507 [2024-04-15 22:54:00.254552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.254561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.254564] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.254568] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.254576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.254579] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.254583] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.254589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.254603] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.254810] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.254816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.254819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.254823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.254828] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:15.507 [2024-04-15 22:54:00.254833] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:15.507 [2024-04-15 22:54:00.254842] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.254846] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.254849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.254856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.254865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.255072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.255079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.255082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.255096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.255110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 
22:54:00.255122] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.255338] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.255344] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.255347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.255361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255365] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.255375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.255384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.255609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.255615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.255619] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.255633] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255636] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.255646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.255656] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.255877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.255883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.255887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.255900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.255908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.255914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.255924] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.256115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.256121] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.256125] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256129] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.256138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256142] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256146] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.256152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.256164] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.256338] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.256344] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.256347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.256360] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.256374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.256384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.256559] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.256565] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.256568] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256572] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.256582] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256586] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256589] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.256596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.507 [2024-04-15 22:54:00.256605] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.507 [2024-04-15 22:54:00.256779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.507 [2024-04-15 22:54:00.256785] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.507 [2024-04-15 22:54:00.256788] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256792] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.507 [2024-04-15 22:54:00.256802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256805] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.507 [2024-04-15 22:54:00.256809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.507 [2024-04-15 22:54:00.256815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.508 [2024-04-15 22:54:00.256825] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.508 [2024-04-15 22:54:00.257023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.508 [2024-04-15 22:54:00.257030] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.508 [2024-04-15 22:54:00.257033] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257037] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.508 [2024-04-15 22:54:00.257046] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257050] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.508 [2024-04-15 22:54:00.257060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.508 [2024-04-15 22:54:00.257070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.508 [2024-04-15 22:54:00.257273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.508 [2024-04-15 22:54:00.257280] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.508 [2024-04-15 22:54:00.257283] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257287] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.508 [2024-04-15 22:54:00.257296] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257300] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257304] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.508 [2024-04-15 22:54:00.257310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.508 [2024-04-15 22:54:00.257320] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.508 [2024-04-15 22:54:00.257502] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.508 [2024-04-15 22:54:00.257509] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.508 [2024-04-15 22:54:00.257512] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257515] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on 
tqpair=0x208e9e0 00:27:15.508 [2024-04-15 22:54:00.257525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.508 [2024-04-15 22:54:00.257539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.508 [2024-04-15 22:54:00.257553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.508 [2024-04-15 22:54:00.257759] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.508 [2024-04-15 22:54:00.257766] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.508 [2024-04-15 22:54:00.257769] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257773] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.508 [2024-04-15 22:54:00.257784] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257788] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.257793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.508 [2024-04-15 22:54:00.257800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.508 [2024-04-15 22:54:00.257811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.508 [2024-04-15 22:54:00.257995] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.508 [2024-04-15 22:54:00.258001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.508 [2024-04-15 22:54:00.258005] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258008] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.508 [2024-04-15 22:54:00.258018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258022] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.508 [2024-04-15 22:54:00.258032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.508 [2024-04-15 22:54:00.258042] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.508 [2024-04-15 22:54:00.258213] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.508 [2024-04-15 22:54:00.258220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.508 [2024-04-15 22:54:00.258223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.508 [2024-04-15 22:54:00.258237] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258240] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.508 [2024-04-15 22:54:00.258251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.508 [2024-04-15 22:54:00.258260] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.508 [2024-04-15 22:54:00.258446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.508 [2024-04-15 22:54:00.258452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.508 [2024-04-15 22:54:00.258455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.508 [2024-04-15 22:54:00.258469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.258476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x208e9e0) 00:27:15.508 [2024-04-15 22:54:00.258483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.508 [2024-04-15 22:54:00.258492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f6b50, cid 3, qid 0 00:27:15.508 [2024-04-15 22:54:00.262550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:15.508 [2024-04-15 22:54:00.262559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:15.508 [2024-04-15 22:54:00.262563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:15.508 [2024-04-15 22:54:00.262566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f6b50) on tqpair=0x208e9e0 00:27:15.508 [2024-04-15 22:54:00.262575] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:15.508 0 Kelvin (-273 Celsius) 00:27:15.508 Available Spare: 0% 00:27:15.508 Available Spare Threshold: 0% 00:27:15.508 Life Percentage Used: 0% 00:27:15.508 Data Units Read: 0 00:27:15.508 Data Units Written: 0 00:27:15.508 Host Read Commands: 0 00:27:15.508 Host Write Commands: 0 00:27:15.508 Controller Busy Time: 0 minutes 00:27:15.508 Power Cycles: 0 00:27:15.508 Power On Hours: 0 hours 00:27:15.508 Unsafe Shutdowns: 0 00:27:15.508 Unrecoverable Media Errors: 0 00:27:15.508 Lifetime Error Log Entries: 0 00:27:15.508 Warning Temperature Time: 0 minutes 00:27:15.508 Critical Temperature Time: 0 minutes 00:27:15.508 00:27:15.508 Number of Queues 00:27:15.508 ================ 00:27:15.508 Number of I/O Submission Queues: 127 00:27:15.508 Number of I/O Completion Queues: 127 00:27:15.508 00:27:15.508 Active Namespaces 00:27:15.508 ================= 00:27:15.508 Namespace ID:1 00:27:15.508 Error Recovery Timeout: Unlimited 00:27:15.508 Command Set Identifier: NVM (00h) 00:27:15.508 Deallocate: Supported 00:27:15.508 Deallocated/Unwritten Error: Not Supported 00:27:15.508 Deallocated Read Value: Unknown 00:27:15.508 Deallocate in Write Zeroes: Not Supported 00:27:15.508 Deallocated Guard Field: 0xFFFF 00:27:15.508 Flush: Supported 00:27:15.508 Reservation: Supported 00:27:15.508 
Namespace Sharing Capabilities: Multiple Controllers 00:27:15.508 Size (in LBAs): 131072 (0GiB) 00:27:15.508 Capacity (in LBAs): 131072 (0GiB) 00:27:15.508 Utilization (in LBAs): 131072 (0GiB) 00:27:15.508 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:15.508 EUI64: ABCDEF0123456789 00:27:15.508 UUID: 987b9555-90d6-43ea-b22c-6c0c13c69cf5 00:27:15.508 Thin Provisioning: Not Supported 00:27:15.508 Per-NS Atomic Units: Yes 00:27:15.508 Atomic Boundary Size (Normal): 0 00:27:15.508 Atomic Boundary Size (PFail): 0 00:27:15.508 Atomic Boundary Offset: 0 00:27:15.508 Maximum Single Source Range Length: 65535 00:27:15.508 Maximum Copy Length: 65535 00:27:15.508 Maximum Source Range Count: 1 00:27:15.508 NGUID/EUI64 Never Reused: No 00:27:15.508 Namespace Write Protected: No 00:27:15.508 Number of LBA Formats: 1 00:27:15.508 Current LBA Format: LBA Format #00 00:27:15.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:15.508 00:27:15.508 22:54:00 -- host/identify.sh@51 -- # sync 00:27:15.508 22:54:00 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.508 22:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.508 22:54:00 -- common/autotest_common.sh@10 -- # set +x 00:27:15.508 22:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.508 22:54:00 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:15.508 22:54:00 -- host/identify.sh@56 -- # nvmftestfini 00:27:15.508 22:54:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:15.508 22:54:00 -- nvmf/common.sh@116 -- # sync 00:27:15.508 22:54:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:15.508 22:54:00 -- nvmf/common.sh@119 -- # set +e 00:27:15.508 22:54:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:15.508 22:54:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:15.508 rmmod nvme_tcp 00:27:15.770 rmmod nvme_fabrics 00:27:15.770 rmmod nvme_keyring 00:27:15.770 22:54:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:15.770 22:54:00 -- nvmf/common.sh@123 -- # set -e 00:27:15.770 22:54:00 -- nvmf/common.sh@124 -- # return 0 00:27:15.770 22:54:00 -- nvmf/common.sh@477 -- # '[' -n 1257575 ']' 00:27:15.770 22:54:00 -- nvmf/common.sh@478 -- # killprocess 1257575 00:27:15.770 22:54:00 -- common/autotest_common.sh@926 -- # '[' -z 1257575 ']' 00:27:15.770 22:54:00 -- common/autotest_common.sh@930 -- # kill -0 1257575 00:27:15.770 22:54:00 -- common/autotest_common.sh@931 -- # uname 00:27:15.770 22:54:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:15.770 22:54:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1257575 00:27:15.770 22:54:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:15.770 22:54:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:15.770 22:54:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1257575' 00:27:15.770 killing process with pid 1257575 00:27:15.770 22:54:00 -- common/autotest_common.sh@945 -- # kill 1257575 00:27:15.770 [2024-04-15 22:54:00.424034] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:15.770 22:54:00 -- common/autotest_common.sh@950 -- # wait 1257575 00:27:15.770 22:54:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:15.770 22:54:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:15.770 22:54:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:15.770 22:54:00 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.770 22:54:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:15.770 22:54:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.770 22:54:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.770 22:54:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.352 22:54:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:18.352 00:27:18.352 real 0m11.654s 00:27:18.352 user 0m8.022s 00:27:18.352 sys 0m6.135s 00:27:18.352 22:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.352 22:54:02 -- common/autotest_common.sh@10 -- # set +x 00:27:18.352 ************************************ 00:27:18.352 END TEST nvmf_identify 00:27:18.352 ************************************ 00:27:18.352 22:54:02 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:18.352 22:54:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:18.352 22:54:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:18.352 22:54:02 -- common/autotest_common.sh@10 -- # set +x 00:27:18.352 ************************************ 00:27:18.352 START TEST nvmf_perf 00:27:18.352 ************************************ 00:27:18.352 22:54:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:18.352 * Looking for test storage... 00:27:18.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.352 22:54:02 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.352 22:54:02 -- nvmf/common.sh@7 -- # uname -s 00:27:18.352 22:54:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.352 22:54:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.352 22:54:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.352 22:54:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.352 22:54:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.352 22:54:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.352 22:54:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.352 22:54:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.353 22:54:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.353 22:54:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.353 22:54:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:18.353 22:54:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:18.353 22:54:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.353 22:54:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.353 22:54:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.353 22:54:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.353 22:54:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.353 22:54:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.353 22:54:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.353 22:54:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.353 22:54:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.353 22:54:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.353 22:54:02 -- paths/export.sh@5 -- # export PATH 00:27:18.353 22:54:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.353 22:54:02 -- nvmf/common.sh@46 -- # : 0 00:27:18.353 22:54:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:18.353 22:54:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:18.353 22:54:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:18.353 22:54:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.353 22:54:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.353 22:54:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:18.353 22:54:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:18.353 22:54:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:18.353 22:54:02 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:18.353 22:54:02 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:18.353 22:54:02 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:18.353 22:54:02 -- host/perf.sh@17 -- # nvmftestinit 00:27:18.353 22:54:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:18.353 22:54:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.353 22:54:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:18.353 22:54:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:18.353 22:54:02 -- 
nvmf/common.sh@400 -- # remove_spdk_ns 00:27:18.353 22:54:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.353 22:54:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.353 22:54:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.353 22:54:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:18.353 22:54:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:18.353 22:54:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:18.353 22:54:02 -- common/autotest_common.sh@10 -- # set +x 00:27:26.494 22:54:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:26.494 22:54:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:26.494 22:54:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:26.494 22:54:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:26.494 22:54:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:26.494 22:54:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:26.494 22:54:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:26.494 22:54:10 -- nvmf/common.sh@294 -- # net_devs=() 00:27:26.494 22:54:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:26.494 22:54:10 -- nvmf/common.sh@295 -- # e810=() 00:27:26.494 22:54:10 -- nvmf/common.sh@295 -- # local -ga e810 00:27:26.494 22:54:10 -- nvmf/common.sh@296 -- # x722=() 00:27:26.494 22:54:10 -- nvmf/common.sh@296 -- # local -ga x722 00:27:26.494 22:54:10 -- nvmf/common.sh@297 -- # mlx=() 00:27:26.494 22:54:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:26.494 22:54:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.494 22:54:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:26.494 22:54:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:26.494 22:54:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:26.494 22:54:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:26.494 22:54:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:26.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:26.494 22:54:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@351 -- # [[ tcp == 
rdma ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:26.494 22:54:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:26.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:26.494 22:54:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:26.494 22:54:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:26.494 22:54:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.494 22:54:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:26.494 22:54:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.494 22:54:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:26.494 Found net devices under 0000:31:00.0: cvl_0_0 00:27:26.494 22:54:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.494 22:54:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:26.494 22:54:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.494 22:54:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:26.494 22:54:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.494 22:54:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:26.494 Found net devices under 0000:31:00.1: cvl_0_1 00:27:26.494 22:54:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.494 22:54:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:26.494 22:54:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:26.494 22:54:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:26.494 22:54:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:26.494 22:54:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.494 22:54:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.494 22:54:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.494 22:54:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:26.494 22:54:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.494 22:54:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.494 22:54:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:26.494 22:54:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.494 22:54:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.494 22:54:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:26.494 22:54:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:26.494 22:54:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.494 22:54:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.494 22:54:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.494 22:54:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.495 22:54:10 -- nvmf/common.sh@257 -- # ip link set 
cvl_0_1 up 00:27:26.495 22:54:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.495 22:54:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.495 22:54:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.495 22:54:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:26.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:27:26.495 00:27:26.495 --- 10.0.0.2 ping statistics --- 00:27:26.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.495 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:27:26.495 22:54:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:27:26.495 00:27:26.495 --- 10.0.0.1 ping statistics --- 00:27:26.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.495 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:27:26.495 22:54:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.495 22:54:11 -- nvmf/common.sh@410 -- # return 0 00:27:26.495 22:54:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:26.495 22:54:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.495 22:54:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:26.495 22:54:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:26.495 22:54:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.495 22:54:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:26.495 22:54:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:26.495 22:54:11 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:26.495 22:54:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:26.495 22:54:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:26.495 22:54:11 -- common/autotest_common.sh@10 -- # set +x 00:27:26.495 22:54:11 -- nvmf/common.sh@469 -- # nvmfpid=1263191 00:27:26.495 22:54:11 -- nvmf/common.sh@470 -- # waitforlisten 1263191 00:27:26.495 22:54:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:26.495 22:54:11 -- common/autotest_common.sh@819 -- # '[' -z 1263191 ']' 00:27:26.495 22:54:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.495 22:54:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:26.495 22:54:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.495 22:54:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:26.495 22:54:11 -- common/autotest_common.sh@10 -- # set +x 00:27:26.495 [2024-04-15 22:54:11.176761] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:27:26.495 [2024-04-15 22:54:11.176809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.495 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.495 [2024-04-15 22:54:11.249901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.755 [2024-04-15 22:54:11.313330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:26.755 [2024-04-15 22:54:11.313464] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.755 [2024-04-15 22:54:11.313474] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.755 [2024-04-15 22:54:11.313483] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.755 [2024-04-15 22:54:11.313529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.755 [2024-04-15 22:54:11.313708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.755 [2024-04-15 22:54:11.313757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.755 [2024-04-15 22:54:11.313758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.326 22:54:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:27.326 22:54:11 -- common/autotest_common.sh@852 -- # return 0 00:27:27.326 22:54:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:27.326 22:54:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:27.326 22:54:11 -- common/autotest_common.sh@10 -- # set +x 00:27:27.326 22:54:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.326 22:54:11 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:27.326 22:54:11 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:27.925 22:54:12 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:27.925 22:54:12 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:27.925 22:54:12 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:27:27.925 22:54:12 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:28.185 22:54:12 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:28.185 22:54:12 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:27:28.185 22:54:12 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:28.185 22:54:12 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:28.185 22:54:12 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:28.185 [2024-04-15 22:54:12.952936] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.185 22:54:12 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:28.446 22:54:13 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:28.446 22:54:13 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:28.706 22:54:13 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:27:28.706 22:54:13 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:28.706 22:54:13 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.967 [2024-04-15 22:54:13.611475] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.967 22:54:13 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:29.228 22:54:13 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:27:29.228 22:54:13 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:29.228 22:54:13 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:29.228 22:54:13 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:30.611 Initializing NVMe Controllers 00:27:30.611 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:27:30.611 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:27:30.611 Initialization complete. Launching workers. 00:27:30.611 ======================================================== 00:27:30.611 Latency(us) 00:27:30.611 Device Information : IOPS MiB/s Average min max 00:27:30.611 PCIE (0000:65:00.0) NSID 1 from core 0: 80945.76 316.19 394.63 13.24 5206.15 00:27:30.611 ======================================================== 00:27:30.611 Total : 80945.76 316.19 394.63 13.24 5206.15 00:27:30.611 00:27:30.611 22:54:15 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:30.612 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.995 Initializing NVMe Controllers 00:27:31.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:31.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:31.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:31.995 Initialization complete. Launching workers. 
00:27:31.995 ======================================================== 00:27:31.995 Latency(us) 00:27:31.995 Device Information : IOPS MiB/s Average min max 00:27:31.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 157.00 0.61 6608.37 116.92 45711.55 00:27:31.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23056.55 7911.00 47897.70 00:27:31.995 ======================================================== 00:27:31.995 Total : 202.00 0.79 10272.57 116.92 47897.70 00:27:31.995 00:27:31.995 22:54:16 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:31.995 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.379 Initializing NVMe Controllers 00:27:33.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:33.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:33.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:33.379 Initialization complete. Launching workers. 00:27:33.379 ======================================================== 00:27:33.379 Latency(us) 00:27:33.379 Device Information : IOPS MiB/s Average min max 00:27:33.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10335.78 40.37 3100.18 411.79 6616.71 00:27:33.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3895.92 15.22 8250.28 5286.82 17052.88 00:27:33.379 ======================================================== 00:27:33.379 Total : 14231.70 55.59 4510.02 411.79 17052.88 00:27:33.379 00:27:33.379 22:54:17 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:33.379 22:54:17 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:33.379 22:54:17 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:33.379 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.923 Initializing NVMe Controllers 00:27:35.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:35.923 Controller IO queue size 128, less than required. 00:27:35.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:35.923 Controller IO queue size 128, less than required. 00:27:35.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:35.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:35.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:35.923 Initialization complete. Launching workers. 
00:27:35.923 ======================================================== 00:27:35.923 Latency(us) 00:27:35.923 Device Information : IOPS MiB/s Average min max 00:27:35.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1087.19 271.80 120206.13 66060.60 169500.13 00:27:35.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.30 146.57 227722.16 62341.36 311766.14 00:27:35.923 ======================================================== 00:27:35.923 Total : 1673.49 418.37 157873.64 62341.36 311766.14 00:27:35.923 00:27:35.923 22:54:20 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:35.923 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.923 No valid NVMe controllers or AIO or URING devices found 00:27:35.923 Initializing NVMe Controllers 00:27:35.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:35.923 Controller IO queue size 128, less than required. 00:27:35.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:35.923 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:35.923 Controller IO queue size 128, less than required. 00:27:35.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:35.923 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:35.923 WARNING: Some requested NVMe devices were skipped 00:27:35.923 22:54:20 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:35.923 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.468 Initializing NVMe Controllers 00:27:38.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.468 Controller IO queue size 128, less than required. 00:27:38.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.468 Controller IO queue size 128, less than required. 00:27:38.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:38.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:38.468 Initialization complete. Launching workers. 
00:27:38.468 00:27:38.468 ==================== 00:27:38.468 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:38.468 TCP transport: 00:27:38.468 polls: 31545 00:27:38.468 idle_polls: 12060 00:27:38.468 sock_completions: 19485 00:27:38.468 nvme_completions: 4068 00:27:38.468 submitted_requests: 6302 00:27:38.468 queued_requests: 1 00:27:38.468 00:27:38.468 ==================== 00:27:38.468 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:38.468 TCP transport: 00:27:38.468 polls: 31749 00:27:38.468 idle_polls: 12175 00:27:38.468 sock_completions: 19574 00:27:38.468 nvme_completions: 4194 00:27:38.468 submitted_requests: 6504 00:27:38.468 queued_requests: 1 00:27:38.468 ======================================================== 00:27:38.468 Latency(us) 00:27:38.468 Device Information : IOPS MiB/s Average min max 00:27:38.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1080.28 270.07 121814.49 73310.74 196437.15 00:27:38.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1111.77 277.94 118452.51 47041.00 190873.83 00:27:38.468 ======================================================== 00:27:38.468 Total : 2192.05 548.01 120109.35 47041.00 196437.15 00:27:38.468 00:27:38.468 22:54:23 -- host/perf.sh@66 -- # sync 00:27:38.468 22:54:23 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.468 22:54:23 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:38.468 22:54:23 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:27:38.468 22:54:23 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:39.853 22:54:24 -- host/perf.sh@72 -- # ls_guid=55b956dc-168f-412c-be10-7be261200558 00:27:39.853 22:54:24 -- host/perf.sh@73 -- # get_lvs_free_mb 55b956dc-168f-412c-be10-7be261200558 00:27:39.853 22:54:24 -- common/autotest_common.sh@1343 -- # local lvs_uuid=55b956dc-168f-412c-be10-7be261200558 00:27:39.853 22:54:24 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:39.853 22:54:24 -- common/autotest_common.sh@1345 -- # local fc 00:27:39.853 22:54:24 -- common/autotest_common.sh@1346 -- # local cs 00:27:39.853 22:54:24 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:39.853 22:54:24 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:39.853 { 00:27:39.853 "uuid": "55b956dc-168f-412c-be10-7be261200558", 00:27:39.853 "name": "lvs_0", 00:27:39.853 "base_bdev": "Nvme0n1", 00:27:39.853 "total_data_clusters": 457407, 00:27:39.853 "free_clusters": 457407, 00:27:39.853 "block_size": 512, 00:27:39.853 "cluster_size": 4194304 00:27:39.853 } 00:27:39.853 ]' 00:27:39.853 22:54:24 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="55b956dc-168f-412c-be10-7be261200558") .free_clusters' 00:27:39.853 22:54:24 -- common/autotest_common.sh@1348 -- # fc=457407 00:27:39.853 22:54:24 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="55b956dc-168f-412c-be10-7be261200558") .cluster_size' 00:27:39.853 22:54:24 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:39.853 22:54:24 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:27:39.853 22:54:24 -- common/autotest_common.sh@1353 -- # echo 1829628 00:27:39.853 1829628 00:27:39.853 22:54:24 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:27:39.853 
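For reference, the 1829628 printed above follows directly from the bdev_lvol_get_lvstores JSON shown in the trace: get_lvs_free_mb multiplies free_clusters by cluster_size and converts to MB. A minimal sketch of the same calculation, assuming the jq binary and the rpc.py path used throughout this run, and selecting the lvstore by name here instead of by uuid as the script does:

  # values from the lvs_0 entry above: free_clusters=457407, cluster_size=4194304 (4 MiB)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  fc=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .free_clusters')
  cs=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .cluster_size')
  echo $(( fc * cs / 1048576 ))   # 457407 * 4194304 / 1048576 = 1829628

Because 1829628 exceeds the 20480 cap checked at perf.sh@77, the test clamps the lvol size to 20480 before the bdev_lvol_create call that follows.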
22:54:24 -- host/perf.sh@78 -- # free_mb=20480 00:27:39.853 22:54:24 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 55b956dc-168f-412c-be10-7be261200558 lbd_0 20480 00:27:40.114 22:54:24 -- host/perf.sh@80 -- # lb_guid=de1a6452-7e56-47a0-a2c7-2070eeacc6b4 00:27:40.114 22:54:24 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore de1a6452-7e56-47a0-a2c7-2070eeacc6b4 lvs_n_0 00:27:41.547 22:54:26 -- host/perf.sh@83 -- # ls_nested_guid=6cd411fd-226f-41e0-9a64-12fd46e11060 00:27:41.547 22:54:26 -- host/perf.sh@84 -- # get_lvs_free_mb 6cd411fd-226f-41e0-9a64-12fd46e11060 00:27:41.547 22:54:26 -- common/autotest_common.sh@1343 -- # local lvs_uuid=6cd411fd-226f-41e0-9a64-12fd46e11060 00:27:41.547 22:54:26 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:41.547 22:54:26 -- common/autotest_common.sh@1345 -- # local fc 00:27:41.547 22:54:26 -- common/autotest_common.sh@1346 -- # local cs 00:27:41.547 22:54:26 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:41.808 22:54:26 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:41.808 { 00:27:41.808 "uuid": "55b956dc-168f-412c-be10-7be261200558", 00:27:41.808 "name": "lvs_0", 00:27:41.808 "base_bdev": "Nvme0n1", 00:27:41.808 "total_data_clusters": 457407, 00:27:41.808 "free_clusters": 452287, 00:27:41.808 "block_size": 512, 00:27:41.808 "cluster_size": 4194304 00:27:41.808 }, 00:27:41.808 { 00:27:41.808 "uuid": "6cd411fd-226f-41e0-9a64-12fd46e11060", 00:27:41.808 "name": "lvs_n_0", 00:27:41.808 "base_bdev": "de1a6452-7e56-47a0-a2c7-2070eeacc6b4", 00:27:41.808 "total_data_clusters": 5114, 00:27:41.808 "free_clusters": 5114, 00:27:41.808 "block_size": 512, 00:27:41.808 "cluster_size": 4194304 00:27:41.808 } 00:27:41.808 ]' 00:27:41.808 22:54:26 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="6cd411fd-226f-41e0-9a64-12fd46e11060") .free_clusters' 00:27:41.808 22:54:26 -- common/autotest_common.sh@1348 -- # fc=5114 00:27:41.808 22:54:26 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="6cd411fd-226f-41e0-9a64-12fd46e11060") .cluster_size' 00:27:41.808 22:54:26 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:41.808 22:54:26 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:27:41.808 22:54:26 -- common/autotest_common.sh@1353 -- # echo 20456 00:27:41.808 20456 00:27:41.808 22:54:26 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:41.808 22:54:26 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6cd411fd-226f-41e0-9a64-12fd46e11060 lbd_nest_0 20456 00:27:42.069 22:54:26 -- host/perf.sh@88 -- # lb_nested_guid=a53cc92f-e61c-416f-b846-b7e03a91c604 00:27:42.069 22:54:26 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.329 22:54:26 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:42.329 22:54:26 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a53cc92f-e61c-416f-b846-b7e03a91c604 00:27:42.329 22:54:27 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.590 22:54:27 -- host/perf.sh@95 -- # 
qd_depth=("1" "32" "128") 00:27:42.590 22:54:27 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:42.590 22:54:27 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:42.590 22:54:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:42.590 22:54:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.590 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.824 Initializing NVMe Controllers 00:27:54.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:54.824 Initialization complete. Launching workers. 00:27:54.824 ======================================================== 00:27:54.824 Latency(us) 00:27:54.824 Device Information : IOPS MiB/s Average min max 00:27:54.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.40 0.02 21161.40 186.87 45685.96 00:27:54.824 ======================================================== 00:27:54.824 Total : 47.40 0.02 21161.40 186.87 45685.96 00:27:54.824 00:27:54.824 22:54:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:54.824 22:54:37 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:54.824 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.875 Initializing NVMe Controllers 00:28:04.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:04.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:04.875 Initialization complete. Launching workers. 00:28:04.875 ======================================================== 00:28:04.875 Latency(us) 00:28:04.875 Device Information : IOPS MiB/s Average min max 00:28:04.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.50 8.44 14821.20 6985.37 47889.80 00:28:04.875 ======================================================== 00:28:04.875 Total : 67.50 8.44 14821.20 6985.37 47889.80 00:28:04.875 00:28:04.875 22:54:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:04.875 22:54:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:04.875 22:54:48 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:04.875 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.876 Initializing NVMe Controllers 00:28:14.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:14.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:14.876 Initialization complete. Launching workers. 
00:28:14.876 ======================================================== 00:28:14.876 Latency(us) 00:28:14.876 Device Information : IOPS MiB/s Average min max 00:28:14.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9020.29 4.40 3547.98 315.25 6972.57 00:28:14.876 ======================================================== 00:28:14.876 Total : 9020.29 4.40 3547.98 315.25 6972.57 00:28:14.876 00:28:14.876 22:54:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:14.876 22:54:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:14.876 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.879 Initializing NVMe Controllers 00:28:24.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.879 Initialization complete. Launching workers. 00:28:24.879 ======================================================== 00:28:24.879 Latency(us) 00:28:24.879 Device Information : IOPS MiB/s Average min max 00:28:24.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3347.79 418.47 9559.24 654.32 25020.46 00:28:24.879 ======================================================== 00:28:24.879 Total : 3347.79 418.47 9559.24 654.32 25020.46 00:28:24.879 00:28:24.879 22:55:08 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:24.879 22:55:08 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:24.879 22:55:08 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.879 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.883 Initializing NVMe Controllers 00:28:34.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.883 Controller IO queue size 128, less than required. 00:28:34.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:34.883 Initialization complete. Launching workers. 00:28:34.883 ======================================================== 00:28:34.883 Latency(us) 00:28:34.883 Device Information : IOPS MiB/s Average min max 00:28:34.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12508.40 6.11 10239.19 1716.16 22787.54 00:28:34.883 ======================================================== 00:28:34.883 Total : 12508.40 6.11 10239.19 1716.16 22787.54 00:28:34.883 00:28:34.883 22:55:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:34.883 22:55:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:34.883 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.890 Initializing NVMe Controllers 00:28:44.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.890 Controller IO queue size 128, less than required. 00:28:44.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:44.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:44.890 Initialization complete. Launching workers. 00:28:44.890 ======================================================== 00:28:44.890 Latency(us) 00:28:44.890 Device Information : IOPS MiB/s Average min max 00:28:44.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1166.08 145.76 110285.59 24299.15 230580.79 00:28:44.890 ======================================================== 00:28:44.890 Total : 1166.08 145.76 110285.59 24299.15 230580.79 00:28:44.890 00:28:44.890 22:55:29 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.890 22:55:29 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a53cc92f-e61c-416f-b846-b7e03a91c604 00:28:46.276 22:55:30 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:46.536 22:55:31 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete de1a6452-7e56-47a0-a2c7-2070eeacc6b4 00:28:46.797 22:55:31 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:46.797 22:55:31 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:46.797 22:55:31 -- host/perf.sh@114 -- # nvmftestfini 00:28:46.797 22:55:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:46.797 22:55:31 -- nvmf/common.sh@116 -- # sync 00:28:46.797 22:55:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:46.797 22:55:31 -- nvmf/common.sh@119 -- # set +e 00:28:46.797 22:55:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:46.797 22:55:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:46.797 rmmod nvme_tcp 00:28:46.797 rmmod nvme_fabrics 00:28:46.797 rmmod nvme_keyring 00:28:46.797 22:55:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:46.797 22:55:31 -- nvmf/common.sh@123 -- # set -e 00:28:46.797 22:55:31 -- nvmf/common.sh@124 -- # return 0 00:28:46.797 22:55:31 -- nvmf/common.sh@477 -- # '[' -n 1263191 ']' 00:28:46.797 22:55:31 -- nvmf/common.sh@478 -- # killprocess 1263191 00:28:46.797 22:55:31 -- common/autotest_common.sh@926 -- # '[' -z 1263191 ']' 00:28:46.797 22:55:31 -- common/autotest_common.sh@930 -- # kill -0 1263191 00:28:46.797 22:55:31 -- common/autotest_common.sh@931 -- # uname 00:28:46.797 22:55:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:46.797 22:55:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1263191 00:28:47.058 22:55:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:47.058 22:55:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:47.058 22:55:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1263191' 00:28:47.058 killing process with pid 1263191 00:28:47.058 22:55:31 -- common/autotest_common.sh@945 -- # kill 1263191 00:28:47.058 22:55:31 -- common/autotest_common.sh@950 -- # wait 1263191 00:28:49.030 22:55:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:49.030 22:55:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:49.030 22:55:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:49.030 22:55:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:49.030 22:55:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:49.030 22:55:33 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.030 22:55:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:49.030 22:55:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.944 22:55:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:50.944 00:28:50.944 real 1m32.987s 00:28:50.944 user 5m24.929s 00:28:50.944 sys 0m15.019s 00:28:50.944 22:55:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.944 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:28:50.944 ************************************ 00:28:50.944 END TEST nvmf_perf 00:28:50.944 ************************************ 00:28:50.944 22:55:35 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:50.944 22:55:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:50.944 22:55:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.944 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:28:50.944 ************************************ 00:28:50.944 START TEST nvmf_fio_host 00:28:50.944 ************************************ 00:28:50.944 22:55:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:51.206 * Looking for test storage... 00:28:51.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.206 22:55:35 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.206 22:55:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.206 22:55:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.206 22:55:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.206 22:55:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.206 22:55:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.206 22:55:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.206 22:55:35 -- paths/export.sh@5 -- # export PATH 00:28:51.206 22:55:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.206 22:55:35 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.206 22:55:35 -- nvmf/common.sh@7 -- # uname -s 00:28:51.206 22:55:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.206 22:55:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.206 22:55:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.206 22:55:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.206 22:55:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.206 22:55:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.206 22:55:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.206 22:55:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.206 22:55:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.206 22:55:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.206 22:55:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:51.206 22:55:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:51.206 22:55:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.206 22:55:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.206 22:55:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.206 22:55:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.206 22:55:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.206 22:55:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.206 22:55:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.206 22:55:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.206 22:55:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.206 22:55:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.206 22:55:35 -- paths/export.sh@5 -- # export PATH 00:28:51.206 22:55:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.206 22:55:35 -- nvmf/common.sh@46 -- # : 0 00:28:51.206 22:55:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:51.206 22:55:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:51.206 22:55:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:51.206 22:55:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.206 22:55:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.206 22:55:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:51.206 22:55:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:51.206 22:55:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:51.206 22:55:35 -- host/fio.sh@12 -- # nvmftestinit 00:28:51.206 22:55:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:51.206 22:55:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.206 22:55:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:51.206 22:55:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:51.206 22:55:35 -- 
nvmf/common.sh@400 -- # remove_spdk_ns 00:28:51.206 22:55:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.206 22:55:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.206 22:55:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.206 22:55:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:51.206 22:55:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:51.206 22:55:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:51.206 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:28:59.416 22:55:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:59.416 22:55:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:59.416 22:55:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:59.416 22:55:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:59.416 22:55:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:59.416 22:55:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:59.416 22:55:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:59.416 22:55:43 -- nvmf/common.sh@294 -- # net_devs=() 00:28:59.416 22:55:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:59.416 22:55:43 -- nvmf/common.sh@295 -- # e810=() 00:28:59.416 22:55:43 -- nvmf/common.sh@295 -- # local -ga e810 00:28:59.416 22:55:43 -- nvmf/common.sh@296 -- # x722=() 00:28:59.416 22:55:43 -- nvmf/common.sh@296 -- # local -ga x722 00:28:59.416 22:55:43 -- nvmf/common.sh@297 -- # mlx=() 00:28:59.416 22:55:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:59.416 22:55:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.416 22:55:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:59.416 22:55:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:59.416 22:55:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:59.416 22:55:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:59.416 22:55:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:59.416 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:59.416 22:55:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@351 -- # [[ tcp == 
rdma ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:59.416 22:55:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:59.416 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:59.416 22:55:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:59.416 22:55:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:59.416 22:55:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.416 22:55:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:59.416 22:55:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.416 22:55:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:59.416 Found net devices under 0000:31:00.0: cvl_0_0 00:28:59.416 22:55:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.416 22:55:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:59.416 22:55:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.416 22:55:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:59.416 22:55:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.416 22:55:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:59.416 Found net devices under 0000:31:00.1: cvl_0_1 00:28:59.416 22:55:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.416 22:55:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:59.416 22:55:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:59.416 22:55:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:59.416 22:55:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.416 22:55:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.416 22:55:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.416 22:55:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:59.416 22:55:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.416 22:55:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.416 22:55:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:59.416 22:55:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.416 22:55:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.416 22:55:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:59.416 22:55:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:59.416 22:55:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.416 22:55:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.416 22:55:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.416 22:55:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.416 22:55:43 -- nvmf/common.sh@257 -- # ip link set 
cvl_0_1 up 00:28:59.416 22:55:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.416 22:55:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.416 22:55:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.416 22:55:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:59.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:28:59.416 00:28:59.416 --- 10.0.0.2 ping statistics --- 00:28:59.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.416 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:28:59.416 22:55:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:28:59.416 00:28:59.416 --- 10.0.0.1 ping statistics --- 00:28:59.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.416 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:28:59.416 22:55:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.416 22:55:43 -- nvmf/common.sh@410 -- # return 0 00:28:59.416 22:55:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:59.416 22:55:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.416 22:55:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:59.416 22:55:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.417 22:55:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:59.417 22:55:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:59.417 22:55:43 -- host/fio.sh@14 -- # [[ y != y ]] 00:28:59.417 22:55:43 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:28:59.417 22:55:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:59.417 22:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:59.417 22:55:43 -- host/fio.sh@22 -- # nvmfpid=1283744 00:28:59.417 22:55:43 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:59.417 22:55:43 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:59.417 22:55:43 -- host/fio.sh@26 -- # waitforlisten 1283744 00:28:59.417 22:55:43 -- common/autotest_common.sh@819 -- # '[' -z 1283744 ']' 00:28:59.417 22:55:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.417 22:55:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:59.417 22:55:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.417 22:55:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:59.417 22:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:59.417 [2024-04-15 22:55:44.034889] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
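Before nvmf_tgt comes up for the fio host tests, nvmf_tcp_init has moved the target-side port into its own network namespace so the initiator (10.0.0.1 on cvl_0_1, default namespace) and the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) exchange NVMe/TCP traffic over the physical E810 ports detected earlier. A condensed sketch of the steps traced above, omitting the preliminary address flushes:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on port 4420
  ping -c 1 10.0.0.2                                             # the two pings above verify both directions

With that plumbing in place, fio.sh launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced at host/fio.sh@21) and the fio jobs connect to 10.0.0.2:4420 from the default namespace.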
00:28:59.417 [2024-04-15 22:55:44.034956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.417 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.417 [2024-04-15 22:55:44.114458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.417 [2024-04-15 22:55:44.187852] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:59.417 [2024-04-15 22:55:44.187990] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.417 [2024-04-15 22:55:44.188000] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.417 [2024-04-15 22:55:44.188009] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.417 [2024-04-15 22:55:44.188124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.417 [2024-04-15 22:55:44.188242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.417 [2024-04-15 22:55:44.188401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.417 [2024-04-15 22:55:44.188401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.359 22:55:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:00.359 22:55:44 -- common/autotest_common.sh@852 -- # return 0 00:29:00.359 22:55:44 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.359 22:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.359 22:55:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.359 [2024-04-15 22:55:44.823594] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.359 22:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.359 22:55:44 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:29:00.359 22:55:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:00.359 22:55:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.359 22:55:44 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:00.359 22:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.359 22:55:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.359 Malloc1 00:29:00.359 22:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.359 22:55:44 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.359 22:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.359 22:55:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.359 22:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.359 22:55:44 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:00.359 22:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.359 22:55:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.359 22:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.359 22:55:44 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.359 22:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.359 22:55:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.359 [2024-04-15 22:55:44.920663] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:29:00.359 22:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.359 22:55:44 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:00.359 22:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.359 22:55:44 -- common/autotest_common.sh@10 -- # set +x 00:29:00.359 22:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.359 22:55:44 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:00.359 22:55:44 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:00.359 22:55:44 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:00.359 22:55:44 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:00.359 22:55:44 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:00.359 22:55:44 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:00.359 22:55:44 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:00.359 22:55:44 -- common/autotest_common.sh@1320 -- # shift 00:29:00.359 22:55:44 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:00.359 22:55:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.359 22:55:44 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:00.359 22:55:44 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:00.359 22:55:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:00.359 22:55:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:00.359 22:55:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:00.359 22:55:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.360 22:55:44 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:00.360 22:55:44 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:00.360 22:55:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:00.360 22:55:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:00.360 22:55:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:00.360 22:55:44 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:00.360 22:55:44 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:00.621 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:00.621 fio-3.35 00:29:00.621 Starting 1 thread 00:29:00.621 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.195 00:29:03.195 test: (groupid=0, jobs=1): err= 0: pid=1284165: Mon Apr 15 22:55:47 2024 00:29:03.195 read: IOPS=12.4k, BW=48.6MiB/s (51.0MB/s)(97.4MiB/2004msec) 00:29:03.195 slat (usec): min=2, max=291, avg= 2.18, stdev= 2.63 00:29:03.195 clat 
(usec): min=3328, max=9108, avg=5696.92, stdev=1119.78 00:29:03.196 lat (usec): min=3362, max=9121, avg=5699.10, stdev=1119.84 00:29:03.196 clat percentiles (usec): 00:29:03.196 | 1.00th=[ 4293], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817], 00:29:03.196 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5342], 00:29:03.196 | 70.00th=[ 6325], 80.00th=[ 7046], 90.00th=[ 7504], 95.00th=[ 7767], 00:29:03.196 | 99.00th=[ 8291], 99.50th=[ 8455], 99.90th=[ 8586], 99.95th=[ 8717], 00:29:03.196 | 99.99th=[ 9110] 00:29:03.196 bw ( KiB/s): min=38512, max=56960, per=99.92%, avg=49730.00, stdev=8757.82, samples=4 00:29:03.196 iops : min= 9628, max=14240, avg=12432.50, stdev=2189.45, samples=4 00:29:03.196 write: IOPS=12.4k, BW=48.6MiB/s (50.9MB/s)(97.3MiB/2004msec); 0 zone resets 00:29:03.196 slat (usec): min=2, max=304, avg= 2.28, stdev= 2.12 00:29:03.196 clat (usec): min=2919, max=7718, avg=4568.52, stdev=878.50 00:29:03.196 lat (usec): min=2936, max=7752, avg=4570.80, stdev=878.61 00:29:03.196 clat percentiles (usec): 00:29:03.196 | 1.00th=[ 3425], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3884], 00:29:03.196 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4359], 00:29:03.196 | 70.00th=[ 5014], 80.00th=[ 5604], 90.00th=[ 5997], 95.00th=[ 6194], 00:29:03.196 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7046], 99.95th=[ 7242], 00:29:03.196 | 99.99th=[ 7701] 00:29:03.196 bw ( KiB/s): min=39080, max=56936, per=99.97%, avg=49706.00, stdev=8614.82, samples=4 00:29:03.196 iops : min= 9770, max=14234, avg=12426.50, stdev=2153.70, samples=4 00:29:03.196 lat (msec) : 4=16.27%, 10=83.73% 00:29:03.196 cpu : usr=68.20%, sys=27.71%, ctx=68, majf=0, minf=6 00:29:03.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:03.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:03.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:03.196 issued rwts: total=24934,24911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:03.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:03.196 00:29:03.196 Run status group 0 (all jobs): 00:29:03.196 READ: bw=48.6MiB/s (51.0MB/s), 48.6MiB/s-48.6MiB/s (51.0MB/s-51.0MB/s), io=97.4MiB (102MB), run=2004-2004msec 00:29:03.196 WRITE: bw=48.6MiB/s (50.9MB/s), 48.6MiB/s-48.6MiB/s (50.9MB/s-50.9MB/s), io=97.3MiB (102MB), run=2004-2004msec 00:29:03.196 22:55:47 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:03.196 22:55:47 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:03.196 22:55:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:03.196 22:55:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:03.196 22:55:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:03.196 22:55:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.196 22:55:47 -- common/autotest_common.sh@1320 -- # shift 00:29:03.196 22:55:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:03.196 22:55:47 -- common/autotest_common.sh@1323 -- # for sanitizer in 
"${sanitizers[@]}" 00:29:03.196 22:55:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.196 22:55:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:03.196 22:55:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:03.196 22:55:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:03.196 22:55:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:03.196 22:55:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:03.196 22:55:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.196 22:55:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:03.196 22:55:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:03.196 22:55:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:03.196 22:55:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:03.196 22:55:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:03.196 22:55:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:03.459 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:03.459 fio-3.35 00:29:03.459 Starting 1 thread 00:29:03.459 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.009 00:29:06.009 test: (groupid=0, jobs=1): err= 0: pid=1284878: Mon Apr 15 22:55:50 2024 00:29:06.009 read: IOPS=8884, BW=139MiB/s (146MB/s)(279MiB/2007msec) 00:29:06.009 slat (usec): min=3, max=114, avg= 3.67, stdev= 1.91 00:29:06.009 clat (usec): min=1061, max=50243, avg=9038.51, stdev=3650.62 00:29:06.009 lat (usec): min=1064, max=50247, avg=9042.18, stdev=3650.77 00:29:06.009 clat percentiles (usec): 00:29:06.009 | 1.00th=[ 4621], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6915], 00:29:06.009 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9241], 00:29:06.009 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[12125], 95.00th=[12518], 00:29:06.009 | 99.00th=[15008], 99.50th=[47449], 99.90th=[49546], 99.95th=[50070], 00:29:06.009 | 99.99th=[50070] 00:29:06.009 bw ( KiB/s): min=59968, max=81024, per=49.51%, avg=70376.00, stdev=9554.00, samples=4 00:29:06.009 iops : min= 3748, max= 5064, avg=4398.50, stdev=597.13, samples=4 00:29:06.009 write: IOPS=5192, BW=81.1MiB/s (85.1MB/s)(143MiB/1763msec); 0 zone resets 00:29:06.009 slat (usec): min=39, max=443, avg=41.30, stdev= 9.36 00:29:06.009 clat (usec): min=2437, max=52652, avg=9579.18, stdev=2833.74 00:29:06.009 lat (usec): min=2477, max=52693, avg=9620.47, stdev=2835.32 00:29:06.009 clat percentiles (usec): 00:29:06.009 | 1.00th=[ 6587], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8225], 00:29:06.009 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:29:06.009 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[11994], 00:29:06.009 | 99.00th=[14615], 99.50th=[16909], 99.90th=[51643], 99.95th=[52167], 00:29:06.009 | 99.99th=[52691] 00:29:06.009 bw ( KiB/s): min=63104, max=84352, per=88.15%, avg=73240.00, stdev=9651.40, samples=4 00:29:06.009 iops : min= 3944, max= 5272, avg=4577.50, stdev=603.21, samples=4 00:29:06.009 lat (msec) : 2=0.01%, 4=0.36%, 10=70.33%, 20=28.82%, 50=0.36% 00:29:06.009 lat (msec) : 100=0.11% 
00:29:06.009 cpu : usr=83.10%, sys=13.61%, ctx=16, majf=0, minf=19 00:29:06.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:06.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:06.009 issued rwts: total=17832,9155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:06.009 00:29:06.009 Run status group 0 (all jobs): 00:29:06.009 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=279MiB (292MB), run=2007-2007msec 00:29:06.009 WRITE: bw=81.1MiB/s (85.1MB/s), 81.1MiB/s-81.1MiB/s (85.1MB/s-85.1MB/s), io=143MiB (150MB), run=1763-1763msec 00:29:06.009 22:55:50 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:06.009 22:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.009 22:55:50 -- common/autotest_common.sh@10 -- # set +x 00:29:06.009 22:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.009 22:55:50 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:29:06.009 22:55:50 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:29:06.009 22:55:50 -- host/fio.sh@49 -- # get_nvme_bdfs 00:29:06.009 22:55:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:06.009 22:55:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:06.009 22:55:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:06.009 22:55:50 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:06.009 22:55:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:06.009 22:55:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:06.009 22:55:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:29:06.009 22:55:50 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:29:06.009 22:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.009 22:55:50 -- common/autotest_common.sh@10 -- # set +x 00:29:06.271 Nvme0n1 00:29:06.271 22:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.271 22:55:50 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:06.271 22:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.271 22:55:50 -- common/autotest_common.sh@10 -- # set +x 00:29:06.532 22:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.532 22:55:51 -- host/fio.sh@51 -- # ls_guid=b44165ea-7882-4247-85d2-bd562da16be1 00:29:06.532 22:55:51 -- host/fio.sh@52 -- # get_lvs_free_mb b44165ea-7882-4247-85d2-bd562da16be1 00:29:06.532 22:55:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=b44165ea-7882-4247-85d2-bd562da16be1 00:29:06.532 22:55:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:06.532 22:55:51 -- common/autotest_common.sh@1345 -- # local fc 00:29:06.532 22:55:51 -- common/autotest_common.sh@1346 -- # local cs 00:29:06.532 22:55:51 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:06.532 22:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.532 22:55:51 -- common/autotest_common.sh@10 -- # set +x 00:29:06.532 22:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.532 22:55:51 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:06.532 { 00:29:06.532 "uuid": 
"b44165ea-7882-4247-85d2-bd562da16be1", 00:29:06.532 "name": "lvs_0", 00:29:06.532 "base_bdev": "Nvme0n1", 00:29:06.532 "total_data_clusters": 1787, 00:29:06.532 "free_clusters": 1787, 00:29:06.532 "block_size": 512, 00:29:06.532 "cluster_size": 1073741824 00:29:06.532 } 00:29:06.532 ]' 00:29:06.532 22:55:51 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="b44165ea-7882-4247-85d2-bd562da16be1") .free_clusters' 00:29:06.532 22:55:51 -- common/autotest_common.sh@1348 -- # fc=1787 00:29:06.532 22:55:51 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="b44165ea-7882-4247-85d2-bd562da16be1") .cluster_size' 00:29:06.793 22:55:51 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:29:06.793 22:55:51 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:29:06.793 22:55:51 -- common/autotest_common.sh@1353 -- # echo 1829888 00:29:06.793 1829888 00:29:06.793 22:55:51 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1829888 00:29:06.793 22:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.793 22:55:51 -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 454998e1-885f-47f7-adda-a0d7ddcbd2d7 00:29:06.793 22:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.793 22:55:51 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:06.793 22:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.793 22:55:51 -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 22:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.793 22:55:51 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:06.793 22:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.793 22:55:51 -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 22:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.793 22:55:51 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:06.793 22:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.793 22:55:51 -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 22:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.793 22:55:51 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:06.793 22:55:51 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:06.793 22:55:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:06.793 22:55:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:06.793 22:55:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:06.793 22:55:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:06.793 22:55:51 -- common/autotest_common.sh@1320 -- # shift 00:29:06.793 22:55:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:06.793 22:55:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.793 22:55:51 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:06.793 22:55:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:06.793 22:55:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:06.793 22:55:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:06.793 22:55:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:06.793 22:55:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.793 22:55:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:06.793 22:55:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:06.793 22:55:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:06.793 22:55:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:06.793 22:55:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:06.793 22:55:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:06.793 22:55:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:07.053 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:07.053 fio-3.35 00:29:07.053 Starting 1 thread 00:29:07.053 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.603 00:29:09.603 test: (groupid=0, jobs=1): err= 0: pid=1285747: Mon Apr 15 22:55:54 2024 00:29:09.603 read: IOPS=11.0k, BW=42.9MiB/s (45.0MB/s)(86.1MiB/2005msec) 00:29:09.603 slat (usec): min=2, max=109, avg= 2.19, stdev= 1.00 00:29:09.603 clat (usec): min=2332, max=10430, avg=6437.01, stdev=497.55 00:29:09.603 lat (usec): min=2350, max=10433, avg=6439.20, stdev=497.50 00:29:09.603 clat percentiles (usec): 00:29:09.603 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 6063], 00:29:09.603 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:29:09.603 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:29:09.603 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[ 8356], 99.95th=[ 9110], 00:29:09.603 | 99.99th=[10028] 00:29:09.603 bw ( KiB/s): min=42784, max=44528, per=99.92%, avg=43914.00, stdev=785.67, samples=4 00:29:09.603 iops : min=10696, max=11132, avg=10978.00, stdev=196.35, samples=4 00:29:09.603 write: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(85.8MiB/2005msec); 0 zone resets 00:29:09.603 slat (nsec): min=2112, max=97727, avg=2293.12, stdev=711.44 00:29:09.603 clat (usec): min=1009, max=9034, avg=5145.58, stdev=422.70 00:29:09.603 lat (usec): min=1016, max=9036, avg=5147.87, stdev=422.69 00:29:09.603 clat percentiles (usec): 00:29:09.603 | 1.00th=[ 4146], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817], 00:29:09.603 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5276], 00:29:09.603 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5800], 00:29:09.603 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 7111], 99.95th=[ 8291], 00:29:09.603 | 99.99th=[ 8979] 00:29:09.603 bw ( KiB/s): min=43136, max=44480, per=100.00%, avg=43824.00, stdev=551.48, samples=4 00:29:09.603 iops : min=10784, max=11120, avg=10956.00, stdev=137.87, samples=4 00:29:09.603 lat (msec) : 2=0.02%, 4=0.18%, 10=99.79%, 20=0.01% 00:29:09.603 cpu : usr=65.77%, sys=30.74%, ctx=52, majf=0, minf=6 00:29:09.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.9% 00:29:09.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:09.603 issued rwts: total=22030,21966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:09.603 00:29:09.603 Run status group 0 (all jobs): 00:29:09.603 READ: bw=42.9MiB/s (45.0MB/s), 42.9MiB/s-42.9MiB/s (45.0MB/s-45.0MB/s), io=86.1MiB (90.2MB), run=2005-2005msec 00:29:09.603 WRITE: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=85.8MiB (90.0MB), run=2005-2005msec 00:29:09.603 22:55:54 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:09.603 22:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:09.603 22:55:54 -- common/autotest_common.sh@10 -- # set +x 00:29:09.603 22:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:09.603 22:55:54 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:09.603 22:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:09.603 22:55:54 -- common/autotest_common.sh@10 -- # set +x 00:29:10.175 22:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:10.175 22:55:54 -- host/fio.sh@62 -- # ls_nested_guid=57959082-8f8f-4b5a-9b3d-521cc6260e54 00:29:10.175 22:55:54 -- host/fio.sh@63 -- # get_lvs_free_mb 57959082-8f8f-4b5a-9b3d-521cc6260e54 00:29:10.175 22:55:54 -- common/autotest_common.sh@1343 -- # local lvs_uuid=57959082-8f8f-4b5a-9b3d-521cc6260e54 00:29:10.175 22:55:54 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:10.175 22:55:54 -- common/autotest_common.sh@1345 -- # local fc 00:29:10.175 22:55:54 -- common/autotest_common.sh@1346 -- # local cs 00:29:10.175 22:55:54 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:10.175 22:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:10.175 22:55:54 -- common/autotest_common.sh@10 -- # set +x 00:29:10.175 22:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:10.175 22:55:54 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:10.176 { 00:29:10.176 "uuid": "b44165ea-7882-4247-85d2-bd562da16be1", 00:29:10.176 "name": "lvs_0", 00:29:10.176 "base_bdev": "Nvme0n1", 00:29:10.176 "total_data_clusters": 1787, 00:29:10.176 "free_clusters": 0, 00:29:10.176 "block_size": 512, 00:29:10.176 "cluster_size": 1073741824 00:29:10.176 }, 00:29:10.176 { 00:29:10.176 "uuid": "57959082-8f8f-4b5a-9b3d-521cc6260e54", 00:29:10.176 "name": "lvs_n_0", 00:29:10.176 "base_bdev": "454998e1-885f-47f7-adda-a0d7ddcbd2d7", 00:29:10.176 "total_data_clusters": 457025, 00:29:10.176 "free_clusters": 457025, 00:29:10.176 "block_size": 512, 00:29:10.176 "cluster_size": 4194304 00:29:10.176 } 00:29:10.176 ]' 00:29:10.176 22:55:54 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="57959082-8f8f-4b5a-9b3d-521cc6260e54") .free_clusters' 00:29:10.176 22:55:54 -- common/autotest_common.sh@1348 -- # fc=457025 00:29:10.176 22:55:54 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="57959082-8f8f-4b5a-9b3d-521cc6260e54") .cluster_size' 00:29:10.176 22:55:54 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:10.176 22:55:54 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:29:10.176 22:55:54 -- common/autotest_common.sh@1353 -- # echo 1828100 00:29:10.176 1828100 00:29:10.176 22:55:54 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 
lbd_nest_0 1828100 00:29:10.176 22:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:10.176 22:55:54 -- common/autotest_common.sh@10 -- # set +x 00:29:11.120 b82b0030-bd31-4245-8a63-5c8b72e3f581 00:29:11.120 22:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:11.120 22:55:55 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:11.120 22:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:11.120 22:55:55 -- common/autotest_common.sh@10 -- # set +x 00:29:11.120 22:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:11.120 22:55:55 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:11.120 22:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:11.120 22:55:55 -- common/autotest_common.sh@10 -- # set +x 00:29:11.120 22:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:11.120 22:55:55 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:11.120 22:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:11.120 22:55:55 -- common/autotest_common.sh@10 -- # set +x 00:29:11.120 22:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:11.120 22:55:55 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:11.120 22:55:55 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:11.120 22:55:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:11.120 22:55:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:11.120 22:55:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:11.120 22:55:55 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.120 22:55:55 -- common/autotest_common.sh@1320 -- # shift 00:29:11.120 22:55:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:11.120 22:55:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:11.120 22:55:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.120 22:55:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:11.120 22:55:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:11.120 22:55:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:11.120 22:55:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:11.120 22:55:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:11.120 22:55:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.120 22:55:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:11.120 22:55:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:11.120 22:55:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:11.120 22:55:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:11.120 22:55:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:11.120 22:55:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:11.690 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:11.690 fio-3.35 00:29:11.690 Starting 1 thread 00:29:11.690 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.230 00:29:14.230 test: (groupid=0, jobs=1): err= 0: pid=1286884: Mon Apr 15 22:55:58 2024 00:29:14.230 read: IOPS=6742, BW=26.3MiB/s (27.6MB/s)(52.9MiB/2008msec) 00:29:14.230 slat (usec): min=2, max=108, avg= 2.25, stdev= 1.32 00:29:14.230 clat (usec): min=4625, max=16891, avg=10524.67, stdev=846.66 00:29:14.230 lat (usec): min=4640, max=16893, avg=10526.92, stdev=846.59 00:29:14.230 clat percentiles (usec): 00:29:14.230 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:29:14.230 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:29:14.230 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:29:14.230 | 99.00th=[12387], 99.50th=[12649], 99.90th=[15664], 99.95th=[16712], 00:29:14.230 | 99.99th=[16909] 00:29:14.230 bw ( KiB/s): min=25928, max=27400, per=99.85%, avg=26928.00, stdev=691.83, samples=4 00:29:14.230 iops : min= 6482, max= 6850, avg=6732.00, stdev=172.96, samples=4 00:29:14.230 write: IOPS=6743, BW=26.3MiB/s (27.6MB/s)(52.9MiB/2008msec); 0 zone resets 00:29:14.230 slat (nsec): min=2152, max=100007, avg=2350.59, stdev=895.19 00:29:14.230 clat (usec): min=1684, max=15390, avg=8341.24, stdev=742.72 00:29:14.230 lat (usec): min=1692, max=15392, avg=8343.59, stdev=742.70 00:29:14.230 clat percentiles (usec): 00:29:14.230 | 1.00th=[ 6587], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7767], 00:29:14.230 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8455], 00:29:14.230 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9372], 00:29:14.230 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[12911], 99.95th=[14484], 00:29:14.230 | 99.99th=[15401] 00:29:14.230 bw ( KiB/s): min=26816, max=27192, per=100.00%, avg=26976.00, stdev=156.90, samples=4 00:29:14.230 iops : min= 6704, max= 6798, avg=6744.00, stdev=39.23, samples=4 00:29:14.230 lat (msec) : 2=0.01%, 4=0.06%, 10=61.86%, 20=38.08% 00:29:14.230 cpu : usr=67.76%, sys=30.59%, ctx=109, majf=0, minf=6 00:29:14.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:14.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:14.230 issued rwts: total=13538,13541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:14.230 00:29:14.230 Run status group 0 (all jobs): 00:29:14.230 READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.9MiB (55.5MB), run=2008-2008msec 00:29:14.230 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.9MiB (55.5MB), run=2008-2008msec 00:29:14.231 22:55:58 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:14.231 22:55:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.231 22:55:58 -- common/autotest_common.sh@10 -- # set +x 00:29:14.231 22:55:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
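Note: the fio runs traced above are launched through a wrapper that first probes the SPDK fio plugin for sanitizer runtimes: ldd is run over build/fio/spdk_nvme, the libasan / libclang_rt.asan entries are grepped out, and whatever resolves (nothing in this build, so asan_lib stays empty) is placed in LD_PRELOAD ahead of the plugin itself. A minimal bash sketch of that logic follows; the binary paths match the trace, but the script name, loop structure and job-file name are illustrative only.

  #!/usr/bin/env bash
  # fio_with_spdk_plugin.sh (illustrative) - preload sanitizer libs, then the SPDK ioengine
  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
  fio_bin=/usr/src/fio/fio
  ld_preload="$plugin"                      # the ioengine itself is always preloaded
  for sanitizer in libasan libclang_rt.asan; do
      # third ldd column is the resolved library path; empty when the plugin is not linked against it
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && ld_preload="$asan_lib $ld_preload"
  done
  # run fio against the NVMe-oF/TCP subsystem using the filename syntax seen in the log
  LD_PRELOAD="$ld_preload" "$fio_bin" example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096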
00:29:14.231 22:55:58 -- host/fio.sh@72 -- # sync 00:29:14.231 22:55:58 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:14.231 22:55:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.231 22:55:58 -- common/autotest_common.sh@10 -- # set +x 00:29:16.190 22:56:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.190 22:56:00 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:29:16.190 22:56:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.190 22:56:00 -- common/autotest_common.sh@10 -- # set +x 00:29:16.190 22:56:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.190 22:56:00 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:29:16.190 22:56:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.190 22:56:00 -- common/autotest_common.sh@10 -- # set +x 00:29:16.190 22:56:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.190 22:56:00 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:29:16.190 22:56:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.190 22:56:00 -- common/autotest_common.sh@10 -- # set +x 00:29:16.190 22:56:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.190 22:56:00 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:29:16.190 22:56:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.190 22:56:00 -- common/autotest_common.sh@10 -- # set +x 00:29:18.106 22:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.106 22:56:02 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:29:18.106 22:56:02 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:29:18.106 22:56:02 -- host/fio.sh@84 -- # nvmftestfini 00:29:18.106 22:56:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:18.106 22:56:02 -- nvmf/common.sh@116 -- # sync 00:29:18.106 22:56:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:18.106 22:56:02 -- nvmf/common.sh@119 -- # set +e 00:29:18.106 22:56:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:18.106 22:56:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:18.106 rmmod nvme_tcp 00:29:18.106 rmmod nvme_fabrics 00:29:18.106 rmmod nvme_keyring 00:29:18.106 22:56:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:18.106 22:56:02 -- nvmf/common.sh@123 -- # set -e 00:29:18.106 22:56:02 -- nvmf/common.sh@124 -- # return 0 00:29:18.106 22:56:02 -- nvmf/common.sh@477 -- # '[' -n 1283744 ']' 00:29:18.106 22:56:02 -- nvmf/common.sh@478 -- # killprocess 1283744 00:29:18.106 22:56:02 -- common/autotest_common.sh@926 -- # '[' -z 1283744 ']' 00:29:18.106 22:56:02 -- common/autotest_common.sh@930 -- # kill -0 1283744 00:29:18.106 22:56:02 -- common/autotest_common.sh@931 -- # uname 00:29:18.106 22:56:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:18.106 22:56:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1283744 00:29:18.106 22:56:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:18.106 22:56:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:18.106 22:56:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1283744' 00:29:18.106 killing process with pid 1283744 00:29:18.106 22:56:02 -- common/autotest_common.sh@945 -- # kill 1283744 00:29:18.106 22:56:02 -- common/autotest_common.sh@950 -- # wait 1283744 00:29:18.367 22:56:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:18.367 22:56:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 
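Note: the teardown traced above unwinds the storage stack in the reverse order it was built: the nested lvol and lvstore go first, then the outer lvol and lvstore, the local NVMe controller is detached, and nvmftestfini finally unloads the kernel initiator modules and stops the target. Condensed into plain RPC calls as a sketch (rpc.py path and bdev names taken from the trace; module unload assumes root):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0     # nested lvol built on top of lvs_0/lbd_0
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0     # nested lvstore
  $rpc bdev_lvol_delete lvs_0/lbd_0            # outer lvol on the NVMe bdev
  $rpc bdev_lvol_delete_lvstore -l lvs_0       # outer lvstore
  $rpc bdev_nvme_detach_controller Nvme0       # release the local PCIe controller
  modprobe -v -r nvme-tcp                      # initiator-side modules loaded for the test
  modprobe -v -r nvme-fabrics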
00:29:18.367 22:56:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:18.367 22:56:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:18.367 22:56:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:18.367 22:56:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.367 22:56:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.367 22:56:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.909 22:56:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:20.909 00:29:20.909 real 0m29.379s 00:29:20.909 user 2m19.352s 00:29:20.909 sys 0m9.955s 00:29:20.909 22:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.909 22:56:05 -- common/autotest_common.sh@10 -- # set +x 00:29:20.909 ************************************ 00:29:20.909 END TEST nvmf_fio_host 00:29:20.909 ************************************ 00:29:20.909 22:56:05 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:20.909 22:56:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:20.909 22:56:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:20.909 22:56:05 -- common/autotest_common.sh@10 -- # set +x 00:29:20.909 ************************************ 00:29:20.909 START TEST nvmf_failover 00:29:20.909 ************************************ 00:29:20.909 22:56:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:20.909 * Looking for test storage... 00:29:20.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.909 22:56:05 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.909 22:56:05 -- nvmf/common.sh@7 -- # uname -s 00:29:20.909 22:56:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.909 22:56:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.909 22:56:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.909 22:56:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.909 22:56:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.909 22:56:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.909 22:56:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.909 22:56:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.909 22:56:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.909 22:56:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.909 22:56:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:20.909 22:56:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:20.909 22:56:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.909 22:56:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.909 22:56:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.909 22:56:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.909 22:56:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.909 22:56:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.910 22:56:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.910 22:56:05 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.910 22:56:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.910 22:56:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.910 22:56:05 -- paths/export.sh@5 -- # export PATH 00:29:20.910 22:56:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.910 22:56:05 -- nvmf/common.sh@46 -- # : 0 00:29:20.910 22:56:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:20.910 22:56:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:20.910 22:56:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:20.910 22:56:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.910 22:56:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.910 22:56:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:20.910 22:56:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:20.910 22:56:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:20.910 22:56:05 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.910 22:56:05 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:20.910 22:56:05 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.910 22:56:05 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:20.910 22:56:05 -- host/failover.sh@18 -- # nvmftestinit 00:29:20.910 22:56:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:20.910 22:56:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.910 22:56:05 -- nvmf/common.sh@436 -- # 
prepare_net_devs 00:29:20.910 22:56:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:20.910 22:56:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:20.910 22:56:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.910 22:56:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.910 22:56:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.910 22:56:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:20.910 22:56:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:20.910 22:56:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:20.910 22:56:05 -- common/autotest_common.sh@10 -- # set +x 00:29:29.047 22:56:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:29.047 22:56:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:29.047 22:56:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:29.047 22:56:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:29.047 22:56:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:29.047 22:56:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:29.047 22:56:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:29.047 22:56:13 -- nvmf/common.sh@294 -- # net_devs=() 00:29:29.047 22:56:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:29.047 22:56:13 -- nvmf/common.sh@295 -- # e810=() 00:29:29.047 22:56:13 -- nvmf/common.sh@295 -- # local -ga e810 00:29:29.047 22:56:13 -- nvmf/common.sh@296 -- # x722=() 00:29:29.047 22:56:13 -- nvmf/common.sh@296 -- # local -ga x722 00:29:29.047 22:56:13 -- nvmf/common.sh@297 -- # mlx=() 00:29:29.047 22:56:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:29.047 22:56:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.048 22:56:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:29.048 22:56:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:29.048 22:56:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:29.048 22:56:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:29.048 22:56:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:29.048 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:29.048 22:56:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.048 22:56:13 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:29.048 22:56:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:29.048 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:29.048 22:56:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:29.048 22:56:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:29.048 22:56:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.048 22:56:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:29.048 22:56:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.048 22:56:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:29.048 Found net devices under 0000:31:00.0: cvl_0_0 00:29:29.048 22:56:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.048 22:56:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:29.048 22:56:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.048 22:56:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:29.048 22:56:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.048 22:56:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:29.048 Found net devices under 0000:31:00.1: cvl_0_1 00:29:29.048 22:56:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.048 22:56:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:29.048 22:56:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:29.048 22:56:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:29.048 22:56:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.048 22:56:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.048 22:56:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.048 22:56:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:29.048 22:56:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.048 22:56:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.048 22:56:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:29.048 22:56:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.048 22:56:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.048 22:56:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:29.048 22:56:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:29.048 22:56:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.048 22:56:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.048 22:56:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.048 22:56:13 -- nvmf/common.sh@254 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.048 22:56:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:29.048 22:56:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.048 22:56:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.048 22:56:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.048 22:56:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:29.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:29:29.048 00:29:29.048 --- 10.0.0.2 ping statistics --- 00:29:29.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.048 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:29:29.048 22:56:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:29.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:29:29.048 00:29:29.048 --- 10.0.0.1 ping statistics --- 00:29:29.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.048 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:29:29.048 22:56:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.048 22:56:13 -- nvmf/common.sh@410 -- # return 0 00:29:29.048 22:56:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:29.048 22:56:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.048 22:56:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:29.048 22:56:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.048 22:56:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:29.048 22:56:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:29.048 22:56:13 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:29.048 22:56:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:29.048 22:56:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:29.048 22:56:13 -- common/autotest_common.sh@10 -- # set +x 00:29:29.048 22:56:13 -- nvmf/common.sh@469 -- # nvmfpid=1292635 00:29:29.048 22:56:13 -- nvmf/common.sh@470 -- # waitforlisten 1292635 00:29:29.048 22:56:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:29.048 22:56:13 -- common/autotest_common.sh@819 -- # '[' -z 1292635 ']' 00:29:29.048 22:56:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.048 22:56:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:29.048 22:56:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.048 22:56:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:29.048 22:56:13 -- common/autotest_common.sh@10 -- # set +x 00:29:29.048 [2024-04-15 22:56:13.525458] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
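Note: before the failover target starts, nvmf_tcp_init builds a two-port loopback topology on the E810 NIC: cvl_0_0 is moved into a network namespace and addressed as the target at 10.0.0.2, while its peer port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, and a ping in each direction verifies reachability. The steps below are reassembled from the trace (interface names are specific to this host):

  TARGET_IF=cvl_0_0            # becomes the target side, 10.0.0.2
  INITIATOR_IF=cvl_0_1         # stays in the root namespace, 10.0.0.1
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator
  # the target itself then runs inside the namespace, as in the trace:
  # ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE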
00:29:29.048 [2024-04-15 22:56:13.525525] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.048 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.048 [2024-04-15 22:56:13.604972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:29.048 [2024-04-15 22:56:13.676843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:29.048 [2024-04-15 22:56:13.676968] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.048 [2024-04-15 22:56:13.676976] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.048 [2024-04-15 22:56:13.676983] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.048 [2024-04-15 22:56:13.677118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.049 [2024-04-15 22:56:13.677255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.049 [2024-04-15 22:56:13.677255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:29.620 22:56:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:29.620 22:56:14 -- common/autotest_common.sh@852 -- # return 0 00:29:29.620 22:56:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:29.620 22:56:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:29.620 22:56:14 -- common/autotest_common.sh@10 -- # set +x 00:29:29.620 22:56:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.620 22:56:14 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:29.879 [2024-04-15 22:56:14.481418] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.879 22:56:14 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:29.879 Malloc0 00:29:30.138 22:56:14 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.138 22:56:14 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.398 22:56:15 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.398 [2024-04-15 22:56:15.131008] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.398 22:56:15 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:30.658 [2024-04-15 22:56:15.279434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:30.658 22:56:15 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:30.658 [2024-04-15 22:56:15.427911] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4422 *** 00:29:30.658 22:56:15 -- host/failover.sh@31 -- # bdevperf_pid=1293051 00:29:30.659 22:56:15 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:30.659 22:56:15 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:30.659 22:56:15 -- host/failover.sh@34 -- # waitforlisten 1293051 /var/tmp/bdevperf.sock 00:29:30.659 22:56:15 -- common/autotest_common.sh@819 -- # '[' -z 1293051 ']' 00:29:30.659 22:56:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:30.659 22:56:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:30.659 22:56:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:30.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:30.659 22:56:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:30.659 22:56:15 -- common/autotest_common.sh@10 -- # set +x 00:29:31.598 22:56:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:31.598 22:56:16 -- common/autotest_common.sh@852 -- # return 0 00:29:31.598 22:56:16 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:31.858 NVMe0n1 00:29:31.858 22:56:16 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:32.430 00:29:32.430 22:56:16 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:32.430 22:56:16 -- host/failover.sh@39 -- # run_test_pid=1293349 00:29:32.430 22:56:16 -- host/failover.sh@41 -- # sleep 1 00:29:33.374 22:56:17 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.374 [2024-04-15 22:56:18.105996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.374 [2024-04-15 22:56:18.106037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.374 [2024-04-15 22:56:18.106043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.374 [2024-04-15 22:56:18.106048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.374 [2024-04-15 22:56:18.106052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.374 [2024-04-15 22:56:18.106057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.374 [2024-04-15 22:56:18.106061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 
00:29:33.374 [2024-04-15 22:56:18.106263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 [2024-04-15 22:56:18.106312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0ae0 is same with the state(5) to be set 00:29:33.375 22:56:18 -- host/failover.sh@45 -- # sleep 3 00:29:36.678 22:56:21 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:36.678 00:29:36.678 22:56:21 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:36.940 [2024-04-15 22:56:21.512787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.940 [2024-04-15 22:56:21.512827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.940 [2024-04-15 22:56:21.512840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.940 [2024-04-15 22:56:21.512847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.940 [2024-04-15 22:56:21.512853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.940 [2024-04-15 22:56:21.512860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.940 [2024-04-15 22:56:21.512866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.940 [2024-04-15 
22:56:21.512872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.941 [2024-04-15 22:56:21.513158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.941 [2024-04-15 22:56:21.513165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.941 [2024-04-15 22:56:21.513172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:29:36.941 22:56:21 -- host/failover.sh@50 -- # sleep 3 00:29:40.245 22:56:24 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:40.245 [2024-04-15 22:56:24.684597] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.245 22:56:24 -- host/failover.sh@55 -- # sleep 1 00:29:41.189 22:56:25 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:41.189 [2024-04-15 22:56:25.855639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855769] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.189 [2024-04-15 22:56:25.855775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855782] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855902] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 
00:29:41.190 [2024-04-15 22:56:25.855914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.855993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.856000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.856006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.856013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.856020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.856028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 [2024-04-15 22:56:25.856035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f2660 is same with the state(5) to be set 00:29:41.190 22:56:25 -- host/failover.sh@59 -- # wait 1293349 00:29:47.857 0 00:29:47.857 22:56:32 -- host/failover.sh@61 -- # killprocess 1293051 00:29:47.857 22:56:32 -- common/autotest_common.sh@926 -- # '[' -z 1293051 ']' 00:29:47.857 22:56:32 -- common/autotest_common.sh@930 -- # kill -0 1293051 00:29:47.857 22:56:32 -- common/autotest_common.sh@931 -- # uname 00:29:47.857 22:56:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:29:47.857 22:56:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1293051 00:29:47.857 22:56:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:47.857 22:56:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:47.857 22:56:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1293051' 00:29:47.857 killing process with pid 1293051 00:29:47.857 22:56:32 -- common/autotest_common.sh@945 -- # kill 1293051 00:29:47.857 22:56:32 -- common/autotest_common.sh@950 -- # wait 1293051 00:29:47.857 22:56:32 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:47.857 [2024-04-15 22:56:15.502147] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:29:47.857 [2024-04-15 22:56:15.502203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293051 ] 00:29:47.857 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.857 [2024-04-15 22:56:15.567845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.857 [2024-04-15 22:56:15.630503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.857 Running I/O for 15 seconds... 00:29:47.857 [2024-04-15 22:56:18.106637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106778] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.106984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.106991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.107000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.107007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.107017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.107024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.107033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.107040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.107050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.107057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.107066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.857 [2024-04-15 22:56:18.107075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.857 [2024-04-15 22:56:18.107085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:123 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41896 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.858 [2024-04-15 22:56:18.107457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.858 [2024-04-15 22:56:18.107579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.858 [2024-04-15 22:56:18.107595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.858 [2024-04-15 22:56:18.107611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107628] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.858 [2024-04-15 22:56:18.107644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.858 [2024-04-15 22:56:18.107653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.858 [2024-04-15 22:56:18.107660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.859 [2024-04-15 22:56:18.107692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.859 [2024-04-15 22:56:18.107982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.107991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.107998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.859 [2024-04-15 22:56:18.108030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.859 [2024-04-15 22:56:18.108063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.859 [2024-04-15 22:56:18.108096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.859 [2024-04-15 22:56:18.108290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.859 [2024-04-15 22:56:18.108298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 
[2024-04-15 22:56:18.108307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:97 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.860 [2024-04-15 22:56:18.108682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:18.108801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134a930 is same with the 
state(5) to be set 00:29:47.860 [2024-04-15 22:56:18.108818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:47.860 [2024-04-15 22:56:18.108824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:47.860 [2024-04-15 22:56:18.108830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41936 len:8 PRP1 0x0 PRP2 0x0 00:29:47.860 [2024-04-15 22:56:18.108838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108875] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x134a930 was disconnected and freed. reset controller. 00:29:47.860 [2024-04-15 22:56:18.108891] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:47.860 [2024-04-15 22:56:18.108912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.860 [2024-04-15 22:56:18.108920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.860 [2024-04-15 22:56:18.108936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.860 [2024-04-15 22:56:18.108951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.860 [2024-04-15 22:56:18.108966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.860 [2024-04-15 22:56:18.108973] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.860 [2024-04-15 22:56:18.111248] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.860 [2024-04-15 22:56:18.111270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132bbd0 (9): Bad file descriptor 00:29:47.860 [2024-04-15 22:56:18.144007] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
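[editor's note] The block above is one complete failover cycle: every I/O still queued on qpair 0x134a930 is completed manually with "ABORTED - SQ DELETION (00/08)", the qpair is disconnected and freed, bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes. A minimal sketch for summarizing cycles like this from a saved copy of the console output follows; "console.log" is a hypothetical file name, and the grep patterns are taken from the messages above.
# minimal sketch, assuming the output above was saved to a hypothetical console.log
grep -c 'ABORTED - SQ DELETION' console.log                                  # completions aborted by SQ deletion
grep -o 'Start failover from [^ ]* to [^ ]*' console.log | sort | uniq -c    # failover transitions and their counts
grep -c 'Resetting controller successful' console.log                        # resets that completed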
00:29:47.860 [2024-04-15 22:56:21.513434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.860 [2024-04-15 22:56:21.513472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.513990] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.513997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.861 [2024-04-15 22:56:21.514006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.861 [2024-04-15 22:56:21.514013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 
22:56:21.514496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.862 [2024-04-15 22:56:21.514547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.862 [2024-04-15 22:56:21.514659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.862 [2024-04-15 22:56:21.514666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.514749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.514806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.514823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.514971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.514989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.514998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.515071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.515088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.515105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.515121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.515138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 
[2024-04-15 22:56:21.515181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.863 [2024-04-15 22:56:21.515321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.863 [2024-04-15 22:56:21.515330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.863 [2024-04-15 22:56:21.515337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.864 [2024-04-15 22:56:21.515402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.864 [2024-04-15 22:56:21.515436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.864 [2024-04-15 22:56:21.515452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:21.515603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1338090 is same with the state(5) to be set 00:29:47.864 [2024-04-15 22:56:21.515622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:47.864 [2024-04-15 22:56:21.515628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:47.864 [2024-04-15 22:56:21.515635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36960 len:8 PRP1 0x0 PRP2 0x0 00:29:47.864 [2024-04-15 22:56:21.515642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515681] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1338090 was disconnected and freed. reset controller. 
00:29:47.864 [2024-04-15 22:56:21.515691] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:47.864 [2024-04-15 22:56:21.515710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.864 [2024-04-15 22:56:21.515719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.864 [2024-04-15 22:56:21.515735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.864 [2024-04-15 22:56:21.515750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.864 [2024-04-15 22:56:21.515766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:21.515774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.864 [2024-04-15 22:56:21.518062] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.864 [2024-04-15 22:56:21.518086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132bbd0 (9): Bad file descriptor 00:29:47.864 [2024-04-15 22:56:21.547528] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
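[editor's note] This second cycle repeats the same pattern one hop further along: the I/O queued on qpair 0x1338090 is aborted, and bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422 before the controller is reset and reconnected. The target side that makes such hops possible is a single subsystem exposed on several TCP listeners; a hedged sketch of the usual rpc.py calls is below (the flags, sizes, and the Malloc0 backing bdev are assumed from common SPDK usage, not taken from this log).
# hedged sketch: one subsystem listening on the three ports this log fails over between
rpc.py nvmf_create_transport -t tcp
rpc.py bdev_malloc_create 64 512 -b Malloc0                   # assumed backing bdev for the namespace
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a    # -a: allow any host to connect
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422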
00:29:47.864 [2024-04-15 22:56:25.856309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856523] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.864 [2024-04-15 22:56:25.856612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.864 [2024-04-15 22:56:25.856619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856702] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5856 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.856988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.856995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.865 [2024-04-15 22:56:25.857012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.865 [2024-04-15 22:56:25.857028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 
[2024-04-15 22:56:25.857044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.865 [2024-04-15 22:56:25.857061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.865 [2024-04-15 22:56:25.857095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.865 [2024-04-15 22:56:25.857112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.865 [2024-04-15 22:56:25.857128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.865 [2024-04-15 22:56:25.857144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.865 [2024-04-15 22:56:25.857160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.865 [2024-04-15 22:56:25.857289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.865 [2024-04-15 22:56:25.857296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:47.866 [2024-04-15 22:56:25.857556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857723] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.866 [2024-04-15 22:56:25.857843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.866 [2024-04-15 22:56:25.857852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.866 [2024-04-15 22:56:25.857859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.857869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.857875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.857885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:6824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.857892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.857901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.857908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.857918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.857925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.857935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.857942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.857951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.857958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.857967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.857974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.857983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.857991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.858007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.858023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.858040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 
22:56:25.858221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.858303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.867 [2024-04-15 22:56:25.858337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.867 [2024-04-15 22:56:25.858465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e750 is same with the state(5) to be set 00:29:47.867 [2024-04-15 22:56:25.858483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:47.867 [2024-04-15 22:56:25.858489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:47.867 [2024-04-15 22:56:25.858496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6520 len:8 PRP1 0x0 PRP2 0x0 00:29:47.867 [2024-04-15 22:56:25.858503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.867 [2024-04-15 22:56:25.858541] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x134e750 was disconnected and freed. reset controller. 
00:29:47.867 [2024-04-15 22:56:25.858554] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:47.867 [2024-04-15 22:56:25.858575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.868 [2024-04-15 22:56:25.858584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.868 [2024-04-15 22:56:25.858594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.868 [2024-04-15 22:56:25.858602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.868 [2024-04-15 22:56:25.858610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.868 [2024-04-15 22:56:25.858617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.868 [2024-04-15 22:56:25.858625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.868 [2024-04-15 22:56:25.858632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.868 [2024-04-15 22:56:25.858639] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.868 [2024-04-15 22:56:25.861030] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.868 [2024-04-15 22:56:25.861055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132bbd0 (9): Bad file descriptor 00:29:47.868 [2024-04-15 22:56:25.888600] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
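The flood of ABORTED - SQ DELETION notices above is bdev_nvme failing back every I/O that was still queued on the TCP qpair when the path to 10.0.0.2:4422 dropped; once the queue is drained the controller is reset and the remaining listener (10.0.0.2:4420) takes over, which the bdev_nvme_failover_trid and _bdev_nvme_reset_ctrlr_complete notices above record. After such a reset the attached controller and the path it ended up on could be inspected over the same RPC socket the test uses (a manual check, not part of failover.sh; the output layout differs between SPDK versions):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers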
00:29:47.868 
00:29:47.868 Latency(us) 
00:29:47.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:47.868 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:29:47.868 Verification LBA range: start 0x0 length 0x4000 
00:29:47.868 NVMe0n1 : 15.00 16816.60 65.69 318.64 0.00 7453.73 723.63 15837.87 
00:29:47.868 =================================================================================================================== 
00:29:47.868 Total : 16816.60 65.69 318.64 0.00 7453.73 723.63 15837.87 
00:29:47.868 Received shutdown signal, test time was about 15.000000 seconds 
00:29:47.868 
00:29:47.868 Latency(us) 
00:29:47.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:47.868 =================================================================================================================== 
00:29:47.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:29:47.868 22:56:32 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:29:47.868 22:56:32 -- host/failover.sh@65 -- # count=3 
00:29:47.868 22:56:32 -- host/failover.sh@67 -- # (( count != 3 )) 
00:29:47.868 22:56:32 -- host/failover.sh@73 -- # bdevperf_pid=1296398 
00:29:47.868 22:56:32 -- host/failover.sh@75 -- # waitforlisten 1296398 /var/tmp/bdevperf.sock 
00:29:47.868 22:56:32 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:29:47.868 22:56:32 -- common/autotest_common.sh@819 -- # '[' -z 1296398 ']' 
00:29:47.868 22:56:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:29:47.868 22:56:32 -- common/autotest_common.sh@824 -- # local max_retries=100 
00:29:47.868 22:56:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:47.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
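The host/failover.sh@65-@67 records above are the pass/fail gate of this stage: the captured bdevperf log is searched for 'Resetting controller successful' and the test only proceeds if exactly three matches are found (count=3 here). A minimal sketch of that check, assuming the bdevperf output was captured to the try.txt file the log references further down:

  count=$(grep -c 'Resetting controller successful' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count"
      exit 1
  fi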
00:29:47.868 22:56:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:47.868 22:56:32 -- common/autotest_common.sh@10 -- # set +x 00:29:48.439 22:56:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:48.439 22:56:33 -- common/autotest_common.sh@852 -- # return 0 00:29:48.439 22:56:33 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:48.439 [2024-04-15 22:56:33.230823] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:48.700 22:56:33 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:48.700 [2024-04-15 22:56:33.399274] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:48.700 22:56:33 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:48.959 NVMe0n1 00:29:48.959 22:56:33 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:49.530 00:29:49.530 22:56:34 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:49.791 00:29:49.791 22:56:34 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:49.791 22:56:34 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:50.052 22:56:34 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:50.313 22:56:34 -- host/failover.sh@87 -- # sleep 3 00:29:53.617 22:56:37 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:53.617 22:56:37 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:53.617 22:56:38 -- host/failover.sh@90 -- # run_test_pid=1297451 00:29:53.617 22:56:38 -- host/failover.sh@92 -- # wait 1297451 00:29:53.617 22:56:38 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:54.560 0 00:29:54.560 22:56:39 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:54.560 [2024-04-15 22:56:32.321333] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:29:54.560 [2024-04-15 22:56:32.321393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296398 ] 00:29:54.560 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.560 [2024-04-15 22:56:32.386741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.560 [2024-04-15 22:56:32.448945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.560 [2024-04-15 22:56:34.851436] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:54.560 [2024-04-15 22:56:34.851483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.560 [2024-04-15 22:56:34.851495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.560 [2024-04-15 22:56:34.851505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.560 [2024-04-15 22:56:34.851512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.560 [2024-04-15 22:56:34.851520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.560 [2024-04-15 22:56:34.851527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.560 [2024-04-15 22:56:34.851535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.560 [2024-04-15 22:56:34.851546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.560 [2024-04-15 22:56:34.851553] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.560 [2024-04-15 22:56:34.851578] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.560 [2024-04-15 22:56:34.851592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec2bd0 (9): Bad file descriptor 00:29:54.560 [2024-04-15 22:56:34.943761] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:54.560 Running I/O for 1 seconds... 
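The rpc.py records surrounding this dump show how the path switching is driven from the bdevperf side: bdev_nvme_attach_controller is issued three times with the same bdev name (NVMe0) and subsystem NQN but ports 4420, 4421 and 4422, so the extra addresses are registered as alternate paths, and each bdev_nvme_detach_controller against the active address forces a failover to the next one. Condensed into a sketch using the socket, address and NQN from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for port in 4420 4421 4422; do
      # first call creates bdev NVMe0; repeats add the address as a failover path
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # dropping the active path triggers the "Start failover from ..." notices seen above
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1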
00:29:54.560 
00:29:54.560 Latency(us) 
00:29:54.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:54.560 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:29:54.560 Verification LBA range: start 0x0 length 0x4000 
00:29:54.560 NVMe0n1 : 1.00 19973.53 78.02 0.00 0.00 6378.17 907.95 14308.69 
00:29:54.560 =================================================================================================================== 
00:29:54.560 Total : 19973.53 78.02 0.00 0.00 6378.17 907.95 14308.69 
00:29:54.560 22:56:39 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
22:56:39 -- host/failover.sh@95 -- # grep -q NVMe0 
22:56:39 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:29:54.821 22:56:39 -- host/failover.sh@99 -- # grep -q NVMe0 
00:29:54.821 22:56:39 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:29:55.082 22:56:39 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:29:55.082 22:56:39 -- host/failover.sh@101 -- # sleep 3 
00:29:58.382 22:56:42 -- host/failover.sh@103 -- # grep -q NVMe0 
00:29:58.382 22:56:42 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:29:58.382 22:56:43 -- host/failover.sh@108 -- # killprocess 1296398 
00:29:58.382 22:56:43 -- common/autotest_common.sh@926 -- # '[' -z 1296398 ']' 
00:29:58.382 22:56:43 -- common/autotest_common.sh@930 -- # kill -0 1296398 
00:29:58.382 22:56:43 -- common/autotest_common.sh@931 -- # uname 
00:29:58.382 22:56:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:29:58.382 22:56:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1296398 
00:29:58.382 22:56:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:29:58.382 22:56:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 
00:29:58.382 22:56:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1296398' 
00:29:58.382 killing process with pid 1296398 
00:29:58.382 22:56:43 -- common/autotest_common.sh@945 -- # kill 1296398 
00:29:58.382 22:56:43 -- common/autotest_common.sh@950 -- # wait 1296398 
00:29:58.642 22:56:43 -- host/failover.sh@110 -- # sync 
00:29:58.642 22:56:43 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:29:58.642 22:56:43 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 
00:29:58.642 22:56:43 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:29:58.642 22:56:43 -- host/failover.sh@116 -- # nvmftestfini 
00:29:58.642 22:56:43 -- nvmf/common.sh@476 -- # nvmfcleanup 
00:29:58.642 22:56:43 -- nvmf/common.sh@116 -- # sync 
00:29:58.642 22:56:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 
00:29:58.642 22:56:43 -- nvmf/common.sh@119 -- # set +e 
00:29:58.642 22:56:43 -- nvmf/common.sh@120 -- # for i in {1..20} 
00:29:58.642 22:56:43 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 
00:29:58.642 rmmod nvme_tcp 
00:29:58.642 rmmod nvme_fabrics 
00:29:58.642 rmmod nvme_keyring 
00:29:58.642 22:56:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
00:29:58.642 22:56:43 -- nvmf/common.sh@123 -- # set -e 
00:29:58.642 22:56:43 -- nvmf/common.sh@124 -- # return 0 
00:29:58.642 22:56:43 -- nvmf/common.sh@477 -- # '[' -n 1292635 ']' 
00:29:58.642 22:56:43 -- nvmf/common.sh@478 -- # killprocess 1292635 
00:29:58.642 22:56:43 -- common/autotest_common.sh@926 -- # '[' -z 1292635 ']' 
00:29:58.642 22:56:43 -- common/autotest_common.sh@930 -- # kill -0 1292635 
00:29:58.642 22:56:43 -- common/autotest_common.sh@931 -- # uname 
00:29:58.903 22:56:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:29:58.903 22:56:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1292635 
00:29:58.903 22:56:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 
00:29:58.903 22:56:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:29:58.903 22:56:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1292635' 
00:29:58.903 killing process with pid 1292635 
00:29:58.903 22:56:43 -- common/autotest_common.sh@945 -- # kill 1292635 
00:29:58.903 22:56:43 -- common/autotest_common.sh@950 -- # wait 1292635 
00:29:58.903 22:56:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
00:29:58.903 22:56:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 
00:29:58.903 22:56:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
00:29:58.903 22:56:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:29:58.903 22:56:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 
00:29:58.903 22:56:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:29:58.903 22:56:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:29:58.903 22:56:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:30:01.446 22:56:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
00:30:01.446 
00:30:01.446 real 0m40.569s 
00:30:01.446 user 2m2.706s 
00:30:01.446 sys 0m8.706s 
00:30:01.446 22:56:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:30:01.446 22:56:45 -- common/autotest_common.sh@10 -- # set +x 
00:30:01.446 ************************************ 
00:30:01.446 END TEST nvmf_failover 
00:30:01.446 ************************************ 
00:30:01.446 22:56:45 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 
00:30:01.446 22:56:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
00:30:01.446 22:56:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:30:01.446 22:56:45 -- common/autotest_common.sh@10 -- # set +x 
00:30:01.446 ************************************ 
00:30:01.446 START TEST nvmf_discovery 
00:30:01.446 ************************************ 
00:30:01.446 22:56:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 
00:30:01.446 * Looking for test storage... 
00:30:01.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:01.446 22:56:45 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.446 22:56:45 -- nvmf/common.sh@7 -- # uname -s 00:30:01.446 22:56:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.446 22:56:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.446 22:56:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.446 22:56:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.447 22:56:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.447 22:56:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.447 22:56:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.447 22:56:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.447 22:56:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.447 22:56:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.447 22:56:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:01.447 22:56:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:01.447 22:56:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.447 22:56:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.447 22:56:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.447 22:56:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.447 22:56:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.447 22:56:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.447 22:56:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.447 22:56:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.447 22:56:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.447 22:56:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.447 22:56:45 -- paths/export.sh@5 -- # export PATH 00:30:01.447 22:56:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.447 22:56:45 -- nvmf/common.sh@46 -- # : 0 00:30:01.447 22:56:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:01.447 22:56:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:01.447 22:56:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:01.447 22:56:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.447 22:56:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.447 22:56:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:01.447 22:56:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:01.447 22:56:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:01.447 22:56:45 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:01.447 22:56:45 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:01.447 22:56:45 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:01.447 22:56:45 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:01.447 22:56:45 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:01.447 22:56:45 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:01.447 22:56:45 -- host/discovery.sh@25 -- # nvmftestinit 00:30:01.447 22:56:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:01.447 22:56:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.447 22:56:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:01.447 22:56:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:01.447 22:56:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:01.447 22:56:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.447 22:56:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.447 22:56:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.447 22:56:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:01.447 22:56:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:01.447 22:56:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:01.447 22:56:45 -- common/autotest_common.sh@10 -- # set +x 00:30:09.592 22:56:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:09.592 22:56:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:09.592 22:56:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:09.592 22:56:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:09.592 22:56:53 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:09.592 22:56:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:09.592 22:56:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:09.592 22:56:53 -- nvmf/common.sh@294 -- # net_devs=() 00:30:09.592 22:56:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:09.592 22:56:53 -- nvmf/common.sh@295 -- # e810=() 00:30:09.592 22:56:53 -- nvmf/common.sh@295 -- # local -ga e810 00:30:09.592 22:56:53 -- nvmf/common.sh@296 -- # x722=() 00:30:09.592 22:56:53 -- nvmf/common.sh@296 -- # local -ga x722 00:30:09.592 22:56:53 -- nvmf/common.sh@297 -- # mlx=() 00:30:09.592 22:56:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:09.592 22:56:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.592 22:56:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:09.592 22:56:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:09.592 22:56:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:09.592 22:56:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:09.592 22:56:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:09.592 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:09.592 22:56:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:09.592 22:56:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:09.592 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:09.592 22:56:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:09.592 22:56:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:09.592 
22:56:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.592 22:56:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:09.592 22:56:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.592 22:56:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:09.592 Found net devices under 0000:31:00.0: cvl_0_0 00:30:09.592 22:56:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.592 22:56:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:09.592 22:56:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.592 22:56:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:09.592 22:56:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.592 22:56:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:09.592 Found net devices under 0000:31:00.1: cvl_0_1 00:30:09.592 22:56:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.592 22:56:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:09.592 22:56:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:09.592 22:56:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:09.592 22:56:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:09.592 22:56:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.592 22:56:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.592 22:56:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.592 22:56:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:09.592 22:56:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.592 22:56:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.592 22:56:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:09.592 22:56:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.592 22:56:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.592 22:56:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:09.592 22:56:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:09.593 22:56:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.593 22:56:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.593 22:56:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.593 22:56:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.593 22:56:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:09.593 22:56:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.593 22:56:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.593 22:56:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.593 22:56:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:09.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:30:09.593 00:30:09.593 --- 10.0.0.2 ping statistics --- 00:30:09.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.593 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:30:09.593 22:56:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:09.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:30:09.593 00:30:09.593 --- 10.0.0.1 ping statistics --- 00:30:09.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.593 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:30:09.593 22:56:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.593 22:56:53 -- nvmf/common.sh@410 -- # return 0 00:30:09.593 22:56:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:09.593 22:56:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.593 22:56:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:09.593 22:56:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:09.593 22:56:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.593 22:56:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:09.593 22:56:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:09.593 22:56:53 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:09.593 22:56:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:09.593 22:56:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:09.593 22:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:09.593 22:56:53 -- nvmf/common.sh@469 -- # nvmfpid=1303145 00:30:09.593 22:56:53 -- nvmf/common.sh@470 -- # waitforlisten 1303145 00:30:09.593 22:56:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:09.593 22:56:53 -- common/autotest_common.sh@819 -- # '[' -z 1303145 ']' 00:30:09.593 22:56:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.593 22:56:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:09.593 22:56:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.593 22:56:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:09.593 22:56:53 -- common/autotest_common.sh@10 -- # set +x 00:30:09.593 [2024-04-15 22:56:53.905466] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:30:09.593 [2024-04-15 22:56:53.905517] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.593 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.593 [2024-04-15 22:56:53.977168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.593 [2024-04-15 22:56:54.040173] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:09.593 [2024-04-15 22:56:54.040291] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.593 [2024-04-15 22:56:54.040299] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.593 [2024-04-15 22:56:54.040307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
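The two-namespace NVMe/TCP topology that the discovery test reuses is built entirely from the nvmf_tcp_init commands traced above. Condensed into a runnable sketch, with the cvl_0_0/cvl_0_1 names coming from the E810 ports detected earlier:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                             # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace
  modprobe nvme-tcp                                              # kernel NVMe/TCP initiator stack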
00:30:09.593 [2024-04-15 22:56:54.040323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.165 22:56:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:10.165 22:56:54 -- common/autotest_common.sh@852 -- # return 0 00:30:10.165 22:56:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:10.165 22:56:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:10.165 22:56:54 -- common/autotest_common.sh@10 -- # set +x 00:30:10.165 22:56:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.165 22:56:54 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:10.165 22:56:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.165 22:56:54 -- common/autotest_common.sh@10 -- # set +x 00:30:10.165 [2024-04-15 22:56:54.718827] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.165 22:56:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.165 22:56:54 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:10.165 22:56:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.165 22:56:54 -- common/autotest_common.sh@10 -- # set +x 00:30:10.165 [2024-04-15 22:56:54.730959] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:10.165 22:56:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.165 22:56:54 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:10.165 22:56:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.165 22:56:54 -- common/autotest_common.sh@10 -- # set +x 00:30:10.165 null0 00:30:10.165 22:56:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.165 22:56:54 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:10.165 22:56:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.165 22:56:54 -- common/autotest_common.sh@10 -- # set +x 00:30:10.165 null1 00:30:10.165 22:56:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.165 22:56:54 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:10.165 22:56:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.165 22:56:54 -- common/autotest_common.sh@10 -- # set +x 00:30:10.165 22:56:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.165 22:56:54 -- host/discovery.sh@45 -- # hostpid=1303459 00:30:10.165 22:56:54 -- host/discovery.sh@46 -- # waitforlisten 1303459 /tmp/host.sock 00:30:10.165 22:56:54 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:10.165 22:56:54 -- common/autotest_common.sh@819 -- # '[' -z 1303459 ']' 00:30:10.165 22:56:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:10.165 22:56:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:10.165 22:56:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:10.165 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:10.165 22:56:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:10.165 22:56:54 -- common/autotest_common.sh@10 -- # set +x 00:30:10.165 [2024-04-15 22:56:54.814246] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
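Two SPDK application instances drive this test: the nvmf target started inside cvl_0_0_ns_spdk (core mask 0x2, default RPC socket) and a second nvmf_tgt on /tmp/host.sock (core mask 0x1) whose bdev_nvme module acts as the discovery client. The target-side preparation traced here amounts to the RPCs below; rpc_cmd is the harness helper around SPDK's JSON-RPC client, and all arguments are copied from the trace:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
          -t tcp -a 10.0.0.2 -s 8009                  # discovery service listener
  rpc_cmd bdev_null_create null0 1000 512             # backing bdevs for the data subsystem
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # On the host instance, the discovery poller is started a little further down in the trace:
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
          -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test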
00:30:10.165 [2024-04-15 22:56:54.814294] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303459 ] 00:30:10.165 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.165 [2024-04-15 22:56:54.878632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.165 [2024-04-15 22:56:54.941060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:10.165 [2024-04-15 22:56:54.941188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.106 22:56:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:11.106 22:56:55 -- common/autotest_common.sh@852 -- # return 0 00:30:11.106 22:56:55 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.106 22:56:55 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:11.106 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.106 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.106 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.106 22:56:55 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:11.106 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.106 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.106 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.106 22:56:55 -- host/discovery.sh@72 -- # notify_id=0 00:30:11.106 22:56:55 -- host/discovery.sh@78 -- # get_subsystem_names 00:30:11.106 22:56:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:11.106 22:56:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:11.106 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.106 22:56:55 -- host/discovery.sh@59 -- # sort 00:30:11.106 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.106 22:56:55 -- host/discovery.sh@59 -- # xargs 00:30:11.106 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.106 22:56:55 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:30:11.106 22:56:55 -- host/discovery.sh@79 -- # get_bdev_list 00:30:11.106 22:56:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.106 22:56:55 -- host/discovery.sh@55 -- # xargs 00:30:11.106 22:56:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.106 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.106 22:56:55 -- host/discovery.sh@55 -- # sort 00:30:11.106 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.106 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.106 22:56:55 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:30:11.106 22:56:55 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:11.106 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.106 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.106 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.106 22:56:55 -- host/discovery.sh@82 -- # get_subsystem_names 00:30:11.106 22:56:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:11.106 22:56:55 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:30:11.107 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.107 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 22:56:55 -- host/discovery.sh@59 -- # sort 00:30:11.107 22:56:55 -- host/discovery.sh@59 -- # xargs 00:30:11.107 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.107 22:56:55 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:30:11.107 22:56:55 -- host/discovery.sh@83 -- # get_bdev_list 00:30:11.107 22:56:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.107 22:56:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.107 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.107 22:56:55 -- host/discovery.sh@55 -- # sort 00:30:11.107 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 22:56:55 -- host/discovery.sh@55 -- # xargs 00:30:11.107 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.107 22:56:55 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:11.107 22:56:55 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:11.107 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.107 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.107 22:56:55 -- host/discovery.sh@86 -- # get_subsystem_names 00:30:11.107 22:56:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:11.107 22:56:55 -- host/discovery.sh@59 -- # xargs 00:30:11.107 22:56:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:11.107 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.107 22:56:55 -- host/discovery.sh@59 -- # sort 00:30:11.107 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.107 22:56:55 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:30:11.107 22:56:55 -- host/discovery.sh@87 -- # get_bdev_list 00:30:11.107 22:56:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.107 22:56:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.107 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.107 22:56:55 -- host/discovery.sh@55 -- # sort 00:30:11.107 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 22:56:55 -- host/discovery.sh@55 -- # xargs 00:30:11.107 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.367 22:56:55 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:11.367 22:56:55 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.367 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.367 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.367 [2024-04-15 22:56:55.934119] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.367 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.367 22:56:55 -- host/discovery.sh@92 -- # get_subsystem_names 00:30:11.367 22:56:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:11.367 22:56:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:11.368 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.368 22:56:55 -- host/discovery.sh@59 -- # sort 00:30:11.368 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.368 22:56:55 
-- host/discovery.sh@59 -- # xargs 00:30:11.368 22:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.368 22:56:55 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:11.368 22:56:55 -- host/discovery.sh@93 -- # get_bdev_list 00:30:11.368 22:56:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.368 22:56:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.368 22:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.368 22:56:55 -- host/discovery.sh@55 -- # sort 00:30:11.368 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:30:11.368 22:56:55 -- host/discovery.sh@55 -- # xargs 00:30:11.368 22:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.368 22:56:56 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:30:11.368 22:56:56 -- host/discovery.sh@94 -- # get_notification_count 00:30:11.368 22:56:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:11.368 22:56:56 -- host/discovery.sh@74 -- # jq '. | length' 00:30:11.368 22:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.368 22:56:56 -- common/autotest_common.sh@10 -- # set +x 00:30:11.368 22:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.368 22:56:56 -- host/discovery.sh@74 -- # notification_count=0 00:30:11.368 22:56:56 -- host/discovery.sh@75 -- # notify_id=0 00:30:11.368 22:56:56 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:30:11.368 22:56:56 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:11.368 22:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.368 22:56:56 -- common/autotest_common.sh@10 -- # set +x 00:30:11.368 22:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.368 22:56:56 -- host/discovery.sh@100 -- # sleep 1 00:30:12.050 [2024-04-15 22:56:56.637628] bdev_nvme.c:6700:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:12.050 [2024-04-15 22:56:56.637648] bdev_nvme.c:6780:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:12.050 [2024-04-15 22:56:56.637663] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:12.050 [2024-04-15 22:56:56.725946] bdev_nvme.c:6629:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:12.326 [2024-04-15 22:56:56.910707] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:12.326 [2024-04-15 22:56:56.910732] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:12.326 22:56:57 -- host/discovery.sh@101 -- # get_subsystem_names 00:30:12.326 22:56:57 -- host/discovery.sh@59 -- # xargs 00:30:12.326 22:56:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:12.326 22:56:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:12.326 22:56:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.326 22:56:57 -- host/discovery.sh@59 -- # sort 00:30:12.326 22:56:57 -- common/autotest_common.sh@10 -- # set +x 00:30:12.326 22:56:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@102 -- # get_bdev_list 00:30:12.587 22:56:57 -- host/discovery.sh@55 -- # jq 
-r '.[].name' 00:30:12.587 22:56:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.587 22:56:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.587 22:56:57 -- host/discovery.sh@55 -- # sort 00:30:12.587 22:56:57 -- common/autotest_common.sh@10 -- # set +x 00:30:12.587 22:56:57 -- host/discovery.sh@55 -- # xargs 00:30:12.587 22:56:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:30:12.587 22:56:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:12.587 22:56:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.587 22:56:57 -- common/autotest_common.sh@10 -- # set +x 00:30:12.587 22:56:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:12.587 22:56:57 -- host/discovery.sh@63 -- # sort -n 00:30:12.587 22:56:57 -- host/discovery.sh@63 -- # xargs 00:30:12.587 22:56:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@104 -- # get_notification_count 00:30:12.587 22:56:57 -- host/discovery.sh@74 -- # jq '. | length' 00:30:12.587 22:56:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:12.587 22:56:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.587 22:56:57 -- common/autotest_common.sh@10 -- # set +x 00:30:12.587 22:56:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@74 -- # notification_count=1 00:30:12.587 22:56:57 -- host/discovery.sh@75 -- # notify_id=1 00:30:12.587 22:56:57 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:12.587 22:56:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.587 22:56:57 -- common/autotest_common.sh@10 -- # set +x 00:30:12.587 22:56:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.587 22:56:57 -- host/discovery.sh@109 -- # sleep 1 00:30:13.528 22:56:58 -- host/discovery.sh@110 -- # get_bdev_list 00:30:13.528 22:56:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:13.528 22:56:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:13.528 22:56:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.528 22:56:58 -- common/autotest_common.sh@10 -- # set +x 00:30:13.528 22:56:58 -- host/discovery.sh@55 -- # sort 00:30:13.528 22:56:58 -- host/discovery.sh@55 -- # xargs 00:30:13.528 22:56:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.528 22:56:58 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:13.528 22:56:58 -- host/discovery.sh@111 -- # get_notification_count 00:30:13.789 22:56:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:13.789 22:56:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:13.789 22:56:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.789 22:56:58 -- common/autotest_common.sh@10 -- # set +x 00:30:13.789 22:56:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.789 22:56:58 -- host/discovery.sh@74 -- # notification_count=1 00:30:13.789 22:56:58 -- host/discovery.sh@75 -- # notify_id=2 00:30:13.789 22:56:58 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:30:13.789 22:56:58 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:13.789 22:56:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.789 22:56:58 -- common/autotest_common.sh@10 -- # set +x 00:30:13.789 [2024-04-15 22:56:58.388885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:13.789 [2024-04-15 22:56:58.389321] bdev_nvme.c:6682:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:13.789 [2024-04-15 22:56:58.389346] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:13.789 22:56:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.789 22:56:58 -- host/discovery.sh@117 -- # sleep 1 00:30:13.789 [2024-04-15 22:56:58.477629] bdev_nvme.c:6624:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:14.050 [2024-04-15 22:56:58.741955] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:14.050 [2024-04-15 22:56:58.741972] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:14.050 [2024-04-15 22:56:58.741977] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:14.623 22:56:59 -- host/discovery.sh@118 -- # get_subsystem_names 00:30:14.623 22:56:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.623 22:56:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:14.623 22:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.623 22:56:59 -- host/discovery.sh@59 -- # sort 00:30:14.623 22:56:59 -- common/autotest_common.sh@10 -- # set +x 00:30:14.623 22:56:59 -- host/discovery.sh@59 -- # xargs 00:30:14.623 22:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.886 22:56:59 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.886 22:56:59 -- host/discovery.sh@119 -- # get_bdev_list 00:30:14.886 22:56:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.886 22:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.886 22:56:59 -- common/autotest_common.sh@10 -- # set +x 00:30:14.886 22:56:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:14.886 22:56:59 -- host/discovery.sh@55 -- # sort 00:30:14.886 22:56:59 -- host/discovery.sh@55 -- # xargs 00:30:14.886 22:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.886 22:56:59 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:14.886 22:56:59 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:30:14.886 22:56:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:14.886 22:56:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:14.886 22:56:59 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.886 22:56:59 -- host/discovery.sh@63 -- # sort -n 00:30:14.886 22:56:59 -- common/autotest_common.sh@10 -- # set +x 00:30:14.886 22:56:59 -- host/discovery.sh@63 -- # xargs 00:30:14.886 22:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.886 22:56:59 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:14.886 22:56:59 -- host/discovery.sh@121 -- # get_notification_count 00:30:14.886 22:56:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:14.886 22:56:59 -- host/discovery.sh@74 -- # jq '. | length' 00:30:14.886 22:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.886 22:56:59 -- common/autotest_common.sh@10 -- # set +x 00:30:14.886 22:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.886 22:56:59 -- host/discovery.sh@74 -- # notification_count=0 00:30:14.886 22:56:59 -- host/discovery.sh@75 -- # notify_id=2 00:30:14.886 22:56:59 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:30:14.886 22:56:59 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.886 22:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.886 22:56:59 -- common/autotest_common.sh@10 -- # set +x 00:30:14.886 [2024-04-15 22:56:59.604495] bdev_nvme.c:6682:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:14.886 [2024-04-15 22:56:59.604516] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:14.886 [2024-04-15 22:56:59.604546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.886 [2024-04-15 22:56:59.604564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.886 [2024-04-15 22:56:59.604574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.886 [2024-04-15 22:56:59.604581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.886 [2024-04-15 22:56:59.604590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.886 [2024-04-15 22:56:59.604597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.886 [2024-04-15 22:56:59.604605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.886 [2024-04-15 22:56:59.604612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.886 [2024-04-15 22:56:59.604619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:14.887 22:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.887 22:56:59 -- host/discovery.sh@127 -- # sleep 1 00:30:14.887 [2024-04-15 22:56:59.614553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:14.887 [2024-04-15 22:56:59.624593] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.887 [2024-04-15 22:56:59.624998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.625343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.625354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:14.887 [2024-04-15 22:56:59.625362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:14.887 [2024-04-15 22:56:59.625374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:14.887 [2024-04-15 22:56:59.625385] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.887 [2024-04-15 22:56:59.625391] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.887 [2024-04-15 22:56:59.625399] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.887 [2024-04-15 22:56:59.625411] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.887 [2024-04-15 22:56:59.634649] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.887 [2024-04-15 22:56:59.634952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.635310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.635321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:14.887 [2024-04-15 22:56:59.635328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:14.887 [2024-04-15 22:56:59.635339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:14.887 [2024-04-15 22:56:59.635349] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.887 [2024-04-15 22:56:59.635356] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.887 [2024-04-15 22:56:59.635362] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.887 [2024-04-15 22:56:59.635373] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.887 [2024-04-15 22:56:59.644699] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.887 [2024-04-15 22:56:59.645107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.645492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.645503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:14.887 [2024-04-15 22:56:59.645511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:14.887 [2024-04-15 22:56:59.645522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:14.887 [2024-04-15 22:56:59.645532] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.887 [2024-04-15 22:56:59.645538] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.887 [2024-04-15 22:56:59.645550] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.887 [2024-04-15 22:56:59.645560] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.887 [2024-04-15 22:56:59.654752] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.887 [2024-04-15 22:56:59.655148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.655491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.655502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:14.887 [2024-04-15 22:56:59.655510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:14.887 [2024-04-15 22:56:59.655521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:14.887 [2024-04-15 22:56:59.655532] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.887 [2024-04-15 22:56:59.655539] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.887 [2024-04-15 22:56:59.655554] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.887 [2024-04-15 22:56:59.655570] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.887 [2024-04-15 22:56:59.664807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.887 [2024-04-15 22:56:59.665133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.665507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.665521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:14.887 [2024-04-15 22:56:59.665528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:14.887 [2024-04-15 22:56:59.665539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:14.887 [2024-04-15 22:56:59.665554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.887 [2024-04-15 22:56:59.665560] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.887 [2024-04-15 22:56:59.665567] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.887 [2024-04-15 22:56:59.665577] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.887 [2024-04-15 22:56:59.674858] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.887 [2024-04-15 22:56:59.675247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.675631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.675642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:14.887 [2024-04-15 22:56:59.675649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:14.887 [2024-04-15 22:56:59.675660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:14.887 [2024-04-15 22:56:59.675670] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.887 [2024-04-15 22:56:59.675676] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.887 [2024-04-15 22:56:59.675683] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.887 [2024-04-15 22:56:59.675693] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.887 [2024-04-15 22:56:59.684908] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.887 [2024-04-15 22:56:59.685728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.685974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.887 [2024-04-15 22:56:59.685987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:14.887 [2024-04-15 22:56:59.685996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:14.887 [2024-04-15 22:56:59.686011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:14.887 [2024-04-15 22:56:59.686039] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.887 [2024-04-15 22:56:59.686047] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.887 [2024-04-15 22:56:59.686055] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.887 [2024-04-15 22:56:59.686067] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.150 [2024-04-15 22:56:59.694960] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:15.150 [2024-04-15 22:56:59.695353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-04-15 22:56:59.695710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-04-15 22:56:59.695721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-04-15 22:56:59.695732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:15.150 [2024-04-15 22:56:59.695743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:15.150 [2024-04-15 22:56:59.695760] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:15.150 [2024-04-15 22:56:59.695767] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:15.150 [2024-04-15 22:56:59.695774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:15.150 [2024-04-15 22:56:59.695784] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.150 [2024-04-15 22:56:59.705015] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:15.150 [2024-04-15 22:56:59.705368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-04-15 22:56:59.705726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-04-15 22:56:59.705737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-04-15 22:56:59.705744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:15.150 [2024-04-15 22:56:59.705755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:15.150 [2024-04-15 22:56:59.705765] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:15.150 [2024-04-15 22:56:59.705771] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:15.150 [2024-04-15 22:56:59.705777] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:15.150 [2024-04-15 22:56:59.705788] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.150 [2024-04-15 22:56:59.715065] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:15.150 [2024-04-15 22:56:59.715464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-04-15 22:56:59.715835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-04-15 22:56:59.715846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-04-15 22:56:59.715853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:15.150 [2024-04-15 22:56:59.715864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:15.150 [2024-04-15 22:56:59.715880] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:15.150 [2024-04-15 22:56:59.715886] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:15.150 [2024-04-15 22:56:59.715893] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:15.150 [2024-04-15 22:56:59.715904] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
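Each resetting-controller block in this stretch is one reconnect attempt against the listener that was just removed: connect() to 10.0.0.2:4420 now fails with errno 111 (ECONNREFUSED), bdev_nvme marks the reset as failed and retries, and the loop ends once the discovery AER triggers a fresh log page read and the stale 4420 path is dropped in favour of 4421. A small sketch of how the surviving path can be checked on the host instance (the jq filter is the one the test's get_subsystem_paths helper uses):

  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid'              # expect only 4421 after the stale path is pruned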
00:30:15.150 [2024-04-15 22:56:59.725116] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:15.150 [2024-04-15 22:56:59.725509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-04-15 22:56:59.725745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-04-15 22:56:59.725755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ffb00 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-04-15 22:56:59.725762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ffb00 is same with the state(5) to be set 00:30:15.150 [2024-04-15 22:56:59.725777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ffb00 (9): Bad file descriptor 00:30:15.150 [2024-04-15 22:56:59.725788] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:15.150 [2024-04-15 22:56:59.725794] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:15.150 [2024-04-15 22:56:59.725801] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:15.150 [2024-04-15 22:56:59.725812] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.150 [2024-04-15 22:56:59.730932] bdev_nvme.c:6487:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:15.150 [2024-04-15 22:56:59.730949] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:16.095 22:57:00 -- host/discovery.sh@128 -- # get_subsystem_names 00:30:16.095 22:57:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:16.095 22:57:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:16.095 22:57:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.095 22:57:00 -- host/discovery.sh@59 -- # sort 00:30:16.095 22:57:00 -- common/autotest_common.sh@10 -- # set +x 00:30:16.095 22:57:00 -- host/discovery.sh@59 -- # xargs 00:30:16.095 22:57:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@129 -- # get_bdev_list 00:30:16.095 22:57:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:16.095 22:57:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:16.095 22:57:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.095 22:57:00 -- host/discovery.sh@55 -- # sort 00:30:16.095 22:57:00 -- common/autotest_common.sh@10 -- # set +x 00:30:16.095 22:57:00 -- host/discovery.sh@55 -- # xargs 00:30:16.095 22:57:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:30:16.095 22:57:00 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:16.095 22:57:00 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:16.095 22:57:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.095 22:57:00 -- host/discovery.sh@63 -- # sort -n 00:30:16.095 22:57:00 -- 
common/autotest_common.sh@10 -- # set +x 00:30:16.095 22:57:00 -- host/discovery.sh@63 -- # xargs 00:30:16.095 22:57:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@131 -- # get_notification_count 00:30:16.095 22:57:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:16.095 22:57:00 -- host/discovery.sh@74 -- # jq '. | length' 00:30:16.095 22:57:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.095 22:57:00 -- common/autotest_common.sh@10 -- # set +x 00:30:16.095 22:57:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@74 -- # notification_count=0 00:30:16.095 22:57:00 -- host/discovery.sh@75 -- # notify_id=2 00:30:16.095 22:57:00 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:16.095 22:57:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.095 22:57:00 -- common/autotest_common.sh@10 -- # set +x 00:30:16.095 22:57:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.095 22:57:00 -- host/discovery.sh@135 -- # sleep 1 00:30:17.038 22:57:01 -- host/discovery.sh@136 -- # get_subsystem_names 00:30:17.038 22:57:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:17.298 22:57:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:17.298 22:57:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.298 22:57:01 -- common/autotest_common.sh@10 -- # set +x 00:30:17.298 22:57:01 -- host/discovery.sh@59 -- # sort 00:30:17.298 22:57:01 -- host/discovery.sh@59 -- # xargs 00:30:17.298 22:57:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.298 22:57:01 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:30:17.298 22:57:01 -- host/discovery.sh@137 -- # get_bdev_list 00:30:17.298 22:57:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:17.298 22:57:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.298 22:57:01 -- host/discovery.sh@55 -- # sort 00:30:17.298 22:57:01 -- host/discovery.sh@55 -- # xargs 00:30:17.298 22:57:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.298 22:57:01 -- common/autotest_common.sh@10 -- # set +x 00:30:17.298 22:57:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.298 22:57:01 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:30:17.298 22:57:01 -- host/discovery.sh@138 -- # get_notification_count 00:30:17.298 22:57:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:17.298 22:57:01 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:17.298 22:57:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.298 22:57:01 -- common/autotest_common.sh@10 -- # set +x 00:30:17.298 22:57:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.298 22:57:01 -- host/discovery.sh@74 -- # notification_count=2 00:30:17.298 22:57:01 -- host/discovery.sh@75 -- # notify_id=4 00:30:17.298 22:57:01 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:30:17.298 22:57:01 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:17.298 22:57:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.298 22:57:01 -- common/autotest_common.sh@10 -- # set +x 00:30:18.240 [2024-04-15 22:57:03.044760] bdev_nvme.c:6700:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:18.240 [2024-04-15 22:57:03.044781] bdev_nvme.c:6780:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:18.240 [2024-04-15 22:57:03.044794] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:18.501 [2024-04-15 22:57:03.132070] bdev_nvme.c:6629:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:18.501 [2024-04-15 22:57:03.237145] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:18.501 [2024-04-15 22:57:03.237177] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:18.501 22:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.501 22:57:03 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:18.501 22:57:03 -- common/autotest_common.sh@640 -- # local es=0 00:30:18.501 22:57:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:18.501 22:57:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:18.501 22:57:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:18.501 22:57:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:18.501 22:57:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:18.501 22:57:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:18.501 22:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.501 22:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:18.501 request: 00:30:18.501 { 00:30:18.501 "name": "nvme", 00:30:18.501 "trtype": "tcp", 00:30:18.501 "traddr": "10.0.0.2", 00:30:18.501 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:18.501 "adrfam": "ipv4", 00:30:18.501 "trsvcid": "8009", 00:30:18.501 "wait_for_attach": true, 00:30:18.501 "method": "bdev_nvme_start_discovery", 00:30:18.501 "req_id": 1 00:30:18.501 } 00:30:18.501 Got JSON-RPC error response 00:30:18.501 response: 00:30:18.501 { 00:30:18.501 "code": -17, 00:30:18.501 "message": "File exists" 00:30:18.501 } 00:30:18.501 22:57:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:18.501 22:57:03 -- common/autotest_common.sh@643 -- # es=1 00:30:18.501 22:57:03 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:18.501 22:57:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:18.501 22:57:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:18.501 22:57:03 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:30:18.501 22:57:03 -- host/discovery.sh@67 -- # xargs 00:30:18.501 22:57:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:18.501 22:57:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:18.501 22:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.501 22:57:03 -- host/discovery.sh@67 -- # sort 00:30:18.501 22:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:18.501 22:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.501 22:57:03 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:30:18.763 22:57:03 -- host/discovery.sh@147 -- # get_bdev_list 00:30:18.763 22:57:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:18.763 22:57:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:18.763 22:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.763 22:57:03 -- host/discovery.sh@55 -- # sort 00:30:18.763 22:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:18.763 22:57:03 -- host/discovery.sh@55 -- # xargs 00:30:18.763 22:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.763 22:57:03 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:18.763 22:57:03 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:18.763 22:57:03 -- common/autotest_common.sh@640 -- # local es=0 00:30:18.763 22:57:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:18.763 22:57:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:18.763 22:57:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:18.763 22:57:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:18.763 22:57:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:18.763 22:57:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:18.763 22:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.763 22:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:18.763 request: 00:30:18.763 { 00:30:18.763 "name": "nvme_second", 00:30:18.763 "trtype": "tcp", 00:30:18.763 "traddr": "10.0.0.2", 00:30:18.763 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:18.763 "adrfam": "ipv4", 00:30:18.763 "trsvcid": "8009", 00:30:18.763 "wait_for_attach": true, 00:30:18.763 "method": "bdev_nvme_start_discovery", 00:30:18.763 "req_id": 1 00:30:18.763 } 00:30:18.763 Got JSON-RPC error response 00:30:18.763 response: 00:30:18.763 { 00:30:18.763 "code": -17, 00:30:18.763 "message": "File exists" 00:30:18.763 } 00:30:18.763 22:57:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:18.763 22:57:03 -- common/autotest_common.sh@643 -- # es=1 00:30:18.763 22:57:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:18.763 22:57:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:18.763 22:57:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:18.763 
22:57:03 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:30:18.763 22:57:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:18.763 22:57:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:18.763 22:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.763 22:57:03 -- host/discovery.sh@67 -- # sort 00:30:18.763 22:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:18.763 22:57:03 -- host/discovery.sh@67 -- # xargs 00:30:18.763 22:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.763 22:57:03 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:30:18.763 22:57:03 -- host/discovery.sh@153 -- # get_bdev_list 00:30:18.763 22:57:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:18.763 22:57:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:18.763 22:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.763 22:57:03 -- host/discovery.sh@55 -- # sort 00:30:18.763 22:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:18.763 22:57:03 -- host/discovery.sh@55 -- # xargs 00:30:18.763 22:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.763 22:57:03 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:18.763 22:57:03 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:18.763 22:57:03 -- common/autotest_common.sh@640 -- # local es=0 00:30:18.763 22:57:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:18.763 22:57:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:18.763 22:57:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:18.763 22:57:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:18.763 22:57:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:18.763 22:57:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:18.763 22:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.763 22:57:03 -- common/autotest_common.sh@10 -- # set +x 00:30:19.708 [2024-04-15 22:57:04.500535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.708 [2024-04-15 22:57:04.500893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.708 [2024-04-15 22:57:04.500909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1071bd0 with addr=10.0.0.2, port=8010 00:30:19.708 [2024-04-15 22:57:04.500924] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:19.708 [2024-04-15 22:57:04.500932] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:19.708 [2024-04-15 22:57:04.500941] bdev_nvme.c:6762:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:21.095 [2024-04-15 22:57:05.503011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.095 [2024-04-15 22:57:05.503399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.095 [2024-04-15 22:57:05.503412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1071bd0 with addr=10.0.0.2, port=8010 00:30:21.095 [2024-04-15 22:57:05.503424] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:21.095 [2024-04-15 22:57:05.503431] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:21.095 [2024-04-15 22:57:05.503439] bdev_nvme.c:6762:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:22.039 [2024-04-15 22:57:06.504981] bdev_nvme.c:6743:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:22.039 request: 00:30:22.039 { 00:30:22.039 "name": "nvme_second", 00:30:22.039 "trtype": "tcp", 00:30:22.039 "traddr": "10.0.0.2", 00:30:22.039 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:22.039 "adrfam": "ipv4", 00:30:22.039 "trsvcid": "8010", 00:30:22.039 "attach_timeout_ms": 3000, 00:30:22.039 "method": "bdev_nvme_start_discovery", 00:30:22.039 "req_id": 1 00:30:22.039 } 00:30:22.039 Got JSON-RPC error response 00:30:22.039 response: 00:30:22.039 { 00:30:22.039 "code": -110, 00:30:22.039 "message": "Connection timed out" 00:30:22.039 } 00:30:22.039 22:57:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:22.039 22:57:06 -- common/autotest_common.sh@643 -- # es=1 00:30:22.039 22:57:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:22.039 22:57:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:22.039 22:57:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:22.039 22:57:06 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:30:22.039 22:57:06 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:22.039 22:57:06 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:22.039 22:57:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.039 22:57:06 -- host/discovery.sh@67 -- # sort 00:30:22.039 22:57:06 -- common/autotest_common.sh@10 -- # set +x 00:30:22.039 22:57:06 -- host/discovery.sh@67 -- # xargs 00:30:22.039 22:57:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.039 22:57:06 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:30:22.039 22:57:06 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:30:22.039 22:57:06 -- host/discovery.sh@162 -- # kill 1303459 00:30:22.039 22:57:06 -- host/discovery.sh@163 -- # nvmftestfini 00:30:22.039 22:57:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:22.039 22:57:06 -- nvmf/common.sh@116 -- # sync 00:30:22.039 22:57:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:22.039 22:57:06 -- nvmf/common.sh@119 -- # set +e 00:30:22.039 22:57:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:22.039 22:57:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:22.039 rmmod nvme_tcp 00:30:22.039 rmmod nvme_fabrics 00:30:22.039 rmmod nvme_keyring 00:30:22.039 22:57:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:22.039 22:57:06 -- nvmf/common.sh@123 -- # set -e 00:30:22.039 22:57:06 -- nvmf/common.sh@124 -- # return 0 00:30:22.039 22:57:06 -- nvmf/common.sh@477 -- # '[' -n 1303145 ']' 00:30:22.039 22:57:06 -- nvmf/common.sh@478 -- # killprocess 1303145 00:30:22.039 22:57:06 -- common/autotest_common.sh@926 -- # '[' -z 1303145 ']' 00:30:22.039 22:57:06 -- common/autotest_common.sh@930 -- # kill -0 1303145 00:30:22.039 22:57:06 -- common/autotest_common.sh@931 -- # uname 00:30:22.039 22:57:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:22.039 22:57:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1303145 
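The two JSON-RPC failures captured above are exactly what the script asserts with its NOT wrapper: registering a second discovery service under a bdev name that is already in use is rejected with -17 ("File exists"), and pointing a new discovery at the unused port 8010 gives up with -110 ("Connection timed out") once the 3000 ms attach timeout expires. A minimal sketch of that negative-assertion pattern, assuming rpc_cmd simply forwards to SPDK's scripts/rpc.py against the host socket (the real helpers live in autotest_common.sh):

rpc_cmd() { ./scripts/rpc.py -s /tmp/host.sock "$@"; }   # assumed shim, for illustration only

# A duplicate discovery name must fail with -17 (File exists).
if rpc_cmd bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
    echo "duplicate discovery unexpectedly succeeded" >&2
    exit 1
fi

# A listener that never answers must fail with -110 after the -T 3000 ms attach timeout.
if rpc_cmd bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
    echo "discovery on the closed port unexpectedly succeeded" >&2
    exit 1
fi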
00:30:22.039 22:57:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:22.039 22:57:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:22.039 22:57:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1303145' 00:30:22.039 killing process with pid 1303145 00:30:22.039 22:57:06 -- common/autotest_common.sh@945 -- # kill 1303145 00:30:22.039 22:57:06 -- common/autotest_common.sh@950 -- # wait 1303145 00:30:22.039 22:57:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:22.039 22:57:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:22.039 22:57:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:22.039 22:57:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:22.039 22:57:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:22.039 22:57:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.039 22:57:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:22.039 22:57:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.585 22:57:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:24.585 00:30:24.585 real 0m23.119s 00:30:24.585 user 0m28.595s 00:30:24.585 sys 0m7.160s 00:30:24.585 22:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.585 22:57:08 -- common/autotest_common.sh@10 -- # set +x 00:30:24.585 ************************************ 00:30:24.585 END TEST nvmf_discovery 00:30:24.585 ************************************ 00:30:24.585 22:57:08 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:24.585 22:57:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:24.585 22:57:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:24.585 22:57:08 -- common/autotest_common.sh@10 -- # set +x 00:30:24.585 ************************************ 00:30:24.585 START TEST nvmf_discovery_remove_ifc 00:30:24.585 ************************************ 00:30:24.585 22:57:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:24.585 * Looking for test storage... 
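Every host-side test in this run is launched through the same run_test wrapper, which is what prints the asterisk banners, the START TEST/END TEST markers and the real/user/sys totals shown above. A simplified, hypothetical reconstruction of that wrapper (the real helper lives in autotest_common.sh and adds more bookkeeping):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # the per-test real/user/sys summary comes from timing the script body
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test nvmf_discovery_remove_ifc ./test/nvmf/host/discovery_remove_ifc.sh --transport=tcp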
00:30:24.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:24.585 22:57:09 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.585 22:57:09 -- nvmf/common.sh@7 -- # uname -s 00:30:24.585 22:57:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.585 22:57:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.585 22:57:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.585 22:57:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.585 22:57:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.585 22:57:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.585 22:57:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.585 22:57:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.585 22:57:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.585 22:57:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.585 22:57:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:24.585 22:57:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:24.585 22:57:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.585 22:57:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.585 22:57:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.585 22:57:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.585 22:57:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.585 22:57:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.585 22:57:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.585 22:57:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.585 22:57:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.585 22:57:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.585 22:57:09 -- paths/export.sh@5 -- # export PATH 00:30:24.585 22:57:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.585 22:57:09 -- nvmf/common.sh@46 -- # : 0 00:30:24.585 22:57:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:24.585 22:57:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:24.585 22:57:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:24.585 22:57:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.585 22:57:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.585 22:57:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:24.585 22:57:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:24.585 22:57:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:24.585 22:57:09 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:24.585 22:57:09 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:24.585 22:57:09 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:24.585 22:57:09 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:24.585 22:57:09 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:24.585 22:57:09 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:24.585 22:57:09 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:24.585 22:57:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:24.585 22:57:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.585 22:57:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:24.585 22:57:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:24.585 22:57:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:24.585 22:57:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.585 22:57:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:24.585 22:57:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.585 22:57:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:24.585 22:57:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:24.585 22:57:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:24.585 22:57:09 -- common/autotest_common.sh@10 -- # set +x 00:30:32.726 22:57:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:32.726 22:57:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:32.726 22:57:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:32.726 22:57:17 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:32.726 22:57:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:32.726 22:57:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:32.726 22:57:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:32.726 22:57:17 -- nvmf/common.sh@294 -- # net_devs=() 00:30:32.726 22:57:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:32.726 22:57:17 -- nvmf/common.sh@295 -- # e810=() 00:30:32.726 22:57:17 -- nvmf/common.sh@295 -- # local -ga e810 00:30:32.726 22:57:17 -- nvmf/common.sh@296 -- # x722=() 00:30:32.726 22:57:17 -- nvmf/common.sh@296 -- # local -ga x722 00:30:32.726 22:57:17 -- nvmf/common.sh@297 -- # mlx=() 00:30:32.726 22:57:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:32.726 22:57:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.726 22:57:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:32.726 22:57:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:32.726 22:57:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:32.726 22:57:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:32.726 22:57:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:32.726 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:32.726 22:57:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:32.726 22:57:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:32.726 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:32.726 22:57:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:32.726 22:57:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:32.726 22:57:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:32.726 22:57:17 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:32.726 22:57:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.726 22:57:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:32.726 22:57:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.726 22:57:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:32.726 Found net devices under 0000:31:00.0: cvl_0_0 00:30:32.726 22:57:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.726 22:57:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:32.726 22:57:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.726 22:57:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:32.726 22:57:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.726 22:57:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:32.726 Found net devices under 0000:31:00.1: cvl_0_1 00:30:32.726 22:57:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.726 22:57:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:32.726 22:57:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:32.726 22:57:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:32.727 22:57:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:32.727 22:57:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:32.727 22:57:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.727 22:57:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.727 22:57:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.727 22:57:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:32.727 22:57:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.727 22:57:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.727 22:57:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:32.727 22:57:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.727 22:57:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.727 22:57:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:32.727 22:57:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:32.727 22:57:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.727 22:57:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.727 22:57:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.727 22:57:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.727 22:57:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:32.727 22:57:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.727 22:57:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.727 22:57:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.727 22:57:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:32.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:32.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:30:32.727 00:30:32.727 --- 10.0.0.2 ping statistics --- 00:30:32.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.727 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:30:32.727 22:57:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:32.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.462 ms 00:30:32.727 00:30:32.727 --- 10.0.0.1 ping statistics --- 00:30:32.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.727 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:30:32.727 22:57:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.727 22:57:17 -- nvmf/common.sh@410 -- # return 0 00:30:32.727 22:57:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:32.727 22:57:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.727 22:57:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:32.727 22:57:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:32.727 22:57:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.727 22:57:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:32.727 22:57:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:32.727 22:57:17 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:32.727 22:57:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:32.727 22:57:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:32.727 22:57:17 -- common/autotest_common.sh@10 -- # set +x 00:30:32.727 22:57:17 -- nvmf/common.sh@469 -- # nvmfpid=1310985 00:30:32.727 22:57:17 -- nvmf/common.sh@470 -- # waitforlisten 1310985 00:30:32.727 22:57:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:32.727 22:57:17 -- common/autotest_common.sh@819 -- # '[' -z 1310985 ']' 00:30:32.727 22:57:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.727 22:57:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:32.727 22:57:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.727 22:57:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:32.727 22:57:17 -- common/autotest_common.sh@10 -- # set +x 00:30:32.727 [2024-04-15 22:57:17.442652] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:30:32.727 [2024-04-15 22:57:17.442719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.727 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.727 [2024-04-15 22:57:17.520103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.988 [2024-04-15 22:57:17.591721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:32.988 [2024-04-15 22:57:17.591852] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
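The two successful pings above confirm the plumbing that nvmf_tcp_init builds from the detected E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the 10.0.0.2 target side, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side. Condensed from the traced commands, the wiring is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port on the initiator-side interface, as traced
ping -c 1 10.0.0.2                                               # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespace -> initiator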
00:30:32.988 [2024-04-15 22:57:17.591860] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.988 [2024-04-15 22:57:17.591868] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.988 [2024-04-15 22:57:17.591885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.560 22:57:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:33.560 22:57:18 -- common/autotest_common.sh@852 -- # return 0 00:30:33.560 22:57:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:33.560 22:57:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:33.560 22:57:18 -- common/autotest_common.sh@10 -- # set +x 00:30:33.560 22:57:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.560 22:57:18 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:33.560 22:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:33.560 22:57:18 -- common/autotest_common.sh@10 -- # set +x 00:30:33.560 [2024-04-15 22:57:18.258710] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.560 [2024-04-15 22:57:18.266822] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:33.560 null0 00:30:33.560 [2024-04-15 22:57:18.298860] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.560 22:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:33.560 22:57:18 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1311317 00:30:33.560 22:57:18 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1311317 /tmp/host.sock 00:30:33.560 22:57:18 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:33.560 22:57:18 -- common/autotest_common.sh@819 -- # '[' -z 1311317 ']' 00:30:33.560 22:57:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:33.560 22:57:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:33.560 22:57:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:33.560 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:33.560 22:57:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:33.560 22:57:18 -- common/autotest_common.sh@10 -- # set +x 00:30:33.560 [2024-04-15 22:57:18.366873] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
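From here on two separate SPDK applications are in play: the target nvmf_tgt runs inside the namespace and owns the 10.0.0.2 listeners (discovery on 8009, the null0-backed subsystem on 4420), while a second nvmf_tgt acts purely as the host/initiator stack and is driven over the /tmp/host.sock RPC socket; it is held at --wait-for-rpc until the script sets the bdev_nvme options and calls framework_start_init. Their launch commands, as traced above (paths shortened):

# Target side, started by nvmfappstart inside the namespace (pid 1310985 in this run)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# Host side, used only for discovery/attach and queried over /tmp/host.sock (pid 1311317 in this run)
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &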
00:30:33.560 [2024-04-15 22:57:18.366921] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311317 ] 00:30:33.822 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.822 [2024-04-15 22:57:18.431619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.822 [2024-04-15 22:57:18.497582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:33.822 [2024-04-15 22:57:18.497706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.394 22:57:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:34.394 22:57:19 -- common/autotest_common.sh@852 -- # return 0 00:30:34.394 22:57:19 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:34.394 22:57:19 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:34.394 22:57:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.394 22:57:19 -- common/autotest_common.sh@10 -- # set +x 00:30:34.394 22:57:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.394 22:57:19 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:34.394 22:57:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.394 22:57:19 -- common/autotest_common.sh@10 -- # set +x 00:30:34.394 22:57:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:34.394 22:57:19 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:34.394 22:57:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:34.394 22:57:19 -- common/autotest_common.sh@10 -- # set +x 00:30:35.807 [2024-04-15 22:57:20.248748] bdev_nvme.c:6700:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:35.807 [2024-04-15 22:57:20.248776] bdev_nvme.c:6780:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:35.807 [2024-04-15 22:57:20.248790] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:35.807 [2024-04-15 22:57:20.337068] bdev_nvme.c:6629:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:35.807 [2024-04-15 22:57:20.397428] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:35.807 [2024-04-15 22:57:20.397470] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:35.807 [2024-04-15 22:57:20.397490] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:35.807 [2024-04-15 22:57:20.397505] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:35.807 [2024-04-15 22:57:20.397526] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:35.807 22:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:35.807 22:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:35.807 22:57:20 -- common/autotest_common.sh@10 -- # set +x 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:35.807 [2024-04-15 22:57:20.406497] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2189120 was disconnected and freed. delete nvme_qpair. 00:30:35.807 22:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:35.807 22:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.807 22:57:20 -- common/autotest_common.sh@10 -- # set +x 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:35.807 22:57:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:35.807 22:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.068 22:57:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:36.068 22:57:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:37.012 22:57:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:37.012 22:57:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.012 22:57:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:37.012 22:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.012 22:57:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:37.012 22:57:21 -- common/autotest_common.sh@10 -- # set +x 00:30:37.012 22:57:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:37.012 22:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.012 22:57:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:37.012 22:57:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:37.970 22:57:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:37.970 22:57:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:37.970 22:57:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.970 22:57:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:37.970 22:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.970 22:57:22 -- common/autotest_common.sh@10 -- # set +x 00:30:37.970 22:57:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:37.970 22:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.970 22:57:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:37.970 22:57:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:39.356 22:57:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:39.356 22:57:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:30:39.356 22:57:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:39.356 22:57:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:39.356 22:57:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:39.356 22:57:23 -- common/autotest_common.sh@10 -- # set +x 00:30:39.356 22:57:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:39.356 22:57:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:39.356 22:57:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:39.356 22:57:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:40.299 22:57:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:40.299 22:57:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:40.299 22:57:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:40.299 22:57:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.299 22:57:24 -- common/autotest_common.sh@10 -- # set +x 00:30:40.299 22:57:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:40.299 22:57:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:40.299 22:57:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.299 22:57:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:40.299 22:57:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:41.242 [2024-04-15 22:57:25.838033] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:41.242 [2024-04-15 22:57:25.838076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.242 [2024-04-15 22:57:25.838088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.242 [2024-04-15 22:57:25.838098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.242 [2024-04-15 22:57:25.838106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.243 [2024-04-15 22:57:25.838114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.243 [2024-04-15 22:57:25.838121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.243 [2024-04-15 22:57:25.838129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.243 [2024-04-15 22:57:25.838136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.243 [2024-04-15 22:57:25.838145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.243 [2024-04-15 22:57:25.838152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.243 [2024-04-15 22:57:25.838159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214f790 is same with the state(5) to be set 00:30:41.243 [2024-04-15 22:57:25.848054] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214f790 (9): Bad file descriptor 00:30:41.243 [2024-04-15 22:57:25.858094] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:41.243 22:57:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:41.243 22:57:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:41.243 22:57:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:41.243 22:57:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.243 22:57:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:41.243 22:57:25 -- common/autotest_common.sh@10 -- # set +x 00:30:41.243 22:57:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:42.185 [2024-04-15 22:57:26.905633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:43.127 [2024-04-15 22:57:27.929574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:43.127 [2024-04-15 22:57:27.929629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214f790 with addr=10.0.0.2, port=4420 00:30:43.127 [2024-04-15 22:57:27.929645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214f790 is same with the state(5) to be set 00:30:43.127 [2024-04-15 22:57:27.930035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214f790 (9): Bad file descriptor 00:30:43.127 [2024-04-15 22:57:27.930060] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:43.127 [2024-04-15 22:57:27.930081] bdev_nvme.c:6451:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:43.127 [2024-04-15 22:57:27.930106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.127 [2024-04-15 22:57:27.930117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.128 [2024-04-15 22:57:27.930128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.128 [2024-04-15 22:57:27.930135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.128 [2024-04-15 22:57:27.930143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.128 [2024-04-15 22:57:27.930150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.128 [2024-04-15 22:57:27.930158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.128 [2024-04-15 22:57:27.930166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.128 [2024-04-15 22:57:27.930174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.128 [2024-04-15 22:57:27.930181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:43.128 [2024-04-15 22:57:27.930188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:30:43.128 [2024-04-15 22:57:27.930672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214fba0 (9): Bad file descriptor 00:30:43.128 [2024-04-15 22:57:27.931685] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:43.128 [2024-04-15 22:57:27.931697] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:43.389 22:57:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:43.389 22:57:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:43.389 22:57:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:44.332 22:57:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:44.332 22:57:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.332 22:57:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:44.332 22:57:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.332 22:57:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:44.332 22:57:28 -- common/autotest_common.sh@10 -- # set +x 00:30:44.332 22:57:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:44.332 22:57:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:44.332 22:57:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.332 22:57:29 -- common/autotest_common.sh@10 -- # set +x 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:44.332 22:57:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:44.333 22:57:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.594 22:57:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:44.594 22:57:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:45.538 [2024-04-15 22:57:29.983693] bdev_nvme.c:6700:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:45.538 [2024-04-15 22:57:29.983716] bdev_nvme.c:6780:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:45.538 [2024-04-15 22:57:29.983730] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:45.538 [2024-04-15 22:57:30.114514] bdev_nvme.c:6629:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:45.538 22:57:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:45.538 22:57:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.538 22:57:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:45.538 22:57:30 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:30:45.538 22:57:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:45.538 22:57:30 -- common/autotest_common.sh@10 -- # set +x 00:30:45.538 22:57:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:45.538 22:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.538 22:57:30 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:45.538 22:57:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:45.538 [2024-04-15 22:57:30.337885] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:45.538 [2024-04-15 22:57:30.337926] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:45.538 [2024-04-15 22:57:30.337946] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:45.538 [2024-04-15 22:57:30.337960] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:45.538 [2024-04-15 22:57:30.337969] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:45.538 [2024-04-15 22:57:30.341513] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2137ae0 was disconnected and freed. delete nvme_qpair. 00:30:46.480 22:57:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:46.480 22:57:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.480 22:57:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:46.480 22:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.480 22:57:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:46.480 22:57:31 -- common/autotest_common.sh@10 -- # set +x 00:30:46.480 22:57:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:46.480 22:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.480 22:57:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:46.480 22:57:31 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:46.480 22:57:31 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1311317 00:30:46.480 22:57:31 -- common/autotest_common.sh@926 -- # '[' -z 1311317 ']' 00:30:46.480 22:57:31 -- common/autotest_common.sh@930 -- # kill -0 1311317 00:30:46.480 22:57:31 -- common/autotest_common.sh@931 -- # uname 00:30:46.480 22:57:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:46.480 22:57:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1311317 00:30:46.741 22:57:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:46.741 22:57:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:46.741 22:57:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1311317' 00:30:46.741 killing process with pid 1311317 00:30:46.741 22:57:31 -- common/autotest_common.sh@945 -- # kill 1311317 00:30:46.741 22:57:31 -- common/autotest_common.sh@950 -- # wait 1311317 00:30:46.741 22:57:31 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:46.741 22:57:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:46.741 22:57:31 -- nvmf/common.sh@116 -- # sync 00:30:46.741 22:57:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:46.741 22:57:31 -- nvmf/common.sh@119 -- # set +e 00:30:46.741 22:57:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:46.741 22:57:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:46.741 rmmod nvme_tcp 00:30:46.741 rmmod nvme_fabrics 00:30:46.741 rmmod nvme_keyring 
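The interface-removal scenario above leans on two small helpers that the trace keeps repeating: get_bdev_list snapshots the host-side bdev names over /tmp/host.sock, and wait_for_bdev polls it once a second until the list matches the expected value, i.e. nvme0n1 after the first attach, the empty string once cvl_0_0 is torn down and the controller-loss timeout fires, and nvme1n1 after the address is restored and rediscovery attaches a fresh controller. A hedged reconstruction that matches the traced behaviour rather than the exact upstream source (the real helper presumably also bounds the number of retries):

get_bdev_list() {
    # same pipeline as discovery_remove_ifc.sh@29 in the trace
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1        # matches the one-second polling seen above
    done
}

wait_for_bdev nvme0n1    # controller attached after the initial discovery
wait_for_bdev ''         # bdev gone after ip addr del / link down on cvl_0_0
wait_for_bdev nvme1n1    # new controller once the interface is back up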
00:30:46.741 22:57:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:46.741 22:57:31 -- nvmf/common.sh@123 -- # set -e 00:30:46.741 22:57:31 -- nvmf/common.sh@124 -- # return 0 00:30:46.741 22:57:31 -- nvmf/common.sh@477 -- # '[' -n 1310985 ']' 00:30:46.741 22:57:31 -- nvmf/common.sh@478 -- # killprocess 1310985 00:30:46.741 22:57:31 -- common/autotest_common.sh@926 -- # '[' -z 1310985 ']' 00:30:46.741 22:57:31 -- common/autotest_common.sh@930 -- # kill -0 1310985 00:30:46.741 22:57:31 -- common/autotest_common.sh@931 -- # uname 00:30:46.741 22:57:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:46.741 22:57:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1310985 00:30:47.002 22:57:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:47.002 22:57:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:47.002 22:57:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1310985' 00:30:47.002 killing process with pid 1310985 00:30:47.002 22:57:31 -- common/autotest_common.sh@945 -- # kill 1310985 00:30:47.002 22:57:31 -- common/autotest_common.sh@950 -- # wait 1310985 00:30:47.002 22:57:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:47.002 22:57:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:47.002 22:57:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:47.002 22:57:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:47.002 22:57:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:47.002 22:57:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.002 22:57:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:47.002 22:57:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.560 22:57:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:49.560 00:30:49.560 real 0m24.832s 00:30:49.560 user 0m28.151s 00:30:49.560 sys 0m7.379s 00:30:49.560 22:57:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.560 22:57:33 -- common/autotest_common.sh@10 -- # set +x 00:30:49.560 ************************************ 00:30:49.560 END TEST nvmf_discovery_remove_ifc 00:30:49.560 ************************************ 00:30:49.560 22:57:33 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:30:49.560 22:57:33 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:49.560 22:57:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:49.560 22:57:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:49.560 22:57:33 -- common/autotest_common.sh@10 -- # set +x 00:30:49.560 ************************************ 00:30:49.560 START TEST nvmf_digest 00:30:49.560 ************************************ 00:30:49.560 22:57:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:49.560 * Looking for test storage... 
00:30:49.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:49.560 22:57:33 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.560 22:57:33 -- nvmf/common.sh@7 -- # uname -s 00:30:49.560 22:57:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.560 22:57:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.561 22:57:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.561 22:57:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.561 22:57:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.561 22:57:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.561 22:57:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.561 22:57:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.561 22:57:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.561 22:57:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.561 22:57:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:49.561 22:57:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:49.561 22:57:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.561 22:57:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.561 22:57:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.561 22:57:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.561 22:57:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.561 22:57:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.561 22:57:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.561 22:57:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.561 22:57:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.561 22:57:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.561 22:57:33 -- paths/export.sh@5 -- # export PATH 00:30:49.561 22:57:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.561 22:57:33 -- nvmf/common.sh@46 -- # : 0 00:30:49.561 22:57:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:49.561 22:57:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:49.561 22:57:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:49.561 22:57:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.561 22:57:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.561 22:57:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:49.561 22:57:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:49.561 22:57:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:49.561 22:57:33 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:49.561 22:57:33 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:49.561 22:57:33 -- host/digest.sh@16 -- # runtime=2 00:30:49.561 22:57:33 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:30:49.561 22:57:33 -- host/digest.sh@132 -- # nvmftestinit 00:30:49.561 22:57:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:49.561 22:57:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.561 22:57:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:49.561 22:57:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:49.561 22:57:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:49.561 22:57:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.561 22:57:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:49.561 22:57:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.561 22:57:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:49.561 22:57:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:49.561 22:57:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:49.561 22:57:33 -- common/autotest_common.sh@10 -- # set +x 00:30:57.706 22:57:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:57.706 22:57:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:57.706 22:57:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:57.706 22:57:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:57.706 22:57:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:57.706 22:57:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:57.706 22:57:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:57.706 22:57:41 -- 
nvmf/common.sh@294 -- # net_devs=() 00:30:57.706 22:57:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:57.706 22:57:41 -- nvmf/common.sh@295 -- # e810=() 00:30:57.706 22:57:41 -- nvmf/common.sh@295 -- # local -ga e810 00:30:57.706 22:57:41 -- nvmf/common.sh@296 -- # x722=() 00:30:57.706 22:57:41 -- nvmf/common.sh@296 -- # local -ga x722 00:30:57.706 22:57:41 -- nvmf/common.sh@297 -- # mlx=() 00:30:57.706 22:57:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:57.706 22:57:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.706 22:57:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:57.706 22:57:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:57.706 22:57:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:57.706 22:57:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:57.706 22:57:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:57.706 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:57.706 22:57:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:57.706 22:57:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:57.706 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:57.706 22:57:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:57.706 22:57:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:57.706 22:57:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.706 22:57:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:57.706 22:57:41 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.706 22:57:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:57.706 Found net devices under 0000:31:00.0: cvl_0_0 00:30:57.706 22:57:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.706 22:57:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:57.706 22:57:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.706 22:57:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:57.706 22:57:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.706 22:57:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:57.706 Found net devices under 0000:31:00.1: cvl_0_1 00:30:57.706 22:57:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.706 22:57:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:57.706 22:57:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:57.706 22:57:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:57.706 22:57:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.706 22:57:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.706 22:57:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.706 22:57:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:57.706 22:57:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.706 22:57:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.706 22:57:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:57.706 22:57:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.706 22:57:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.706 22:57:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:57.706 22:57:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:57.706 22:57:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.706 22:57:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.706 22:57:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.706 22:57:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.706 22:57:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:57.706 22:57:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.706 22:57:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.706 22:57:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.706 22:57:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:57.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:30:57.706 00:30:57.706 --- 10.0.0.2 ping statistics --- 00:30:57.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.706 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:30:57.706 22:57:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:57.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:30:57.706 00:30:57.706 --- 10.0.0.1 ping statistics --- 00:30:57.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.706 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:30:57.706 22:57:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.706 22:57:41 -- nvmf/common.sh@410 -- # return 0 00:30:57.706 22:57:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:57.706 22:57:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.706 22:57:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:57.706 22:57:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.706 22:57:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:57.706 22:57:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:57.706 22:57:41 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:57.706 22:57:41 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:30:57.706 22:57:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:57.706 22:57:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:57.706 22:57:41 -- common/autotest_common.sh@10 -- # set +x 00:30:57.706 ************************************ 00:30:57.706 START TEST nvmf_digest_clean 00:30:57.706 ************************************ 00:30:57.706 22:57:41 -- common/autotest_common.sh@1104 -- # run_digest 00:30:57.707 22:57:41 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:30:57.707 22:57:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:57.707 22:57:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:57.707 22:57:41 -- common/autotest_common.sh@10 -- # set +x 00:30:57.707 22:57:41 -- nvmf/common.sh@469 -- # nvmfpid=1318468 00:30:57.707 22:57:41 -- nvmf/common.sh@470 -- # waitforlisten 1318468 00:30:57.707 22:57:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:57.707 22:57:41 -- common/autotest_common.sh@819 -- # '[' -z 1318468 ']' 00:30:57.707 22:57:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.707 22:57:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:57.707 22:57:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.707 22:57:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:57.707 22:57:41 -- common/autotest_common.sh@10 -- # set +x 00:30:57.707 [2024-04-15 22:57:41.954262] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:30:57.707 [2024-04-15 22:57:41.954315] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.707 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.707 [2024-04-15 22:57:42.026697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.707 [2024-04-15 22:57:42.088443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:57.707 [2024-04-15 22:57:42.088572] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.707 [2024-04-15 22:57:42.088581] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.707 [2024-04-15 22:57:42.088588] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.707 [2024-04-15 22:57:42.088614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.967 22:57:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:57.967 22:57:42 -- common/autotest_common.sh@852 -- # return 0 00:30:57.967 22:57:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:57.967 22:57:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:57.967 22:57:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.967 22:57:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.967 22:57:42 -- host/digest.sh@120 -- # common_target_config 00:30:57.967 22:57:42 -- host/digest.sh@43 -- # rpc_cmd 00:30:57.967 22:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.967 22:57:42 -- common/autotest_common.sh@10 -- # set +x 00:30:58.228 null0 00:30:58.228 [2024-04-15 22:57:42.819940] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.228 [2024-04-15 22:57:42.844117] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.228 22:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.228 22:57:42 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:30:58.228 22:57:42 -- host/digest.sh@77 -- # local rw bs qd 00:30:58.228 22:57:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:58.228 22:57:42 -- host/digest.sh@80 -- # rw=randread 00:30:58.228 22:57:42 -- host/digest.sh@80 -- # bs=4096 00:30:58.228 22:57:42 -- host/digest.sh@80 -- # qd=128 00:30:58.228 22:57:42 -- host/digest.sh@82 -- # bperfpid=1318671 00:30:58.228 22:57:42 -- host/digest.sh@83 -- # waitforlisten 1318671 /var/tmp/bperf.sock 00:30:58.228 22:57:42 -- common/autotest_common.sh@819 -- # '[' -z 1318671 ']' 00:30:58.228 22:57:42 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:58.228 22:57:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:58.228 22:57:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:58.228 22:57:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:58.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:58.228 22:57:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:58.228 22:57:42 -- common/autotest_common.sh@10 -- # set +x 00:30:58.228 [2024-04-15 22:57:42.895259] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:30:58.228 [2024-04-15 22:57:42.895305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318671 ] 00:30:58.228 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.228 [2024-04-15 22:57:42.959057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.228 [2024-04-15 22:57:43.021265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.171 22:57:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:59.171 22:57:43 -- common/autotest_common.sh@852 -- # return 0 00:30:59.171 22:57:43 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:59.171 22:57:43 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:59.171 22:57:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:59.171 22:57:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.171 22:57:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.432 nvme0n1 00:30:59.432 22:57:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:59.432 22:57:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:59.432 Running I/O for 2 seconds... 
00:31:02.013 00:31:02.013 Latency(us) 00:31:02.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.013 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:02.013 nvme0n1 : 2.01 16768.82 65.50 0.00 0.00 7627.50 2280.11 20534.61 00:31:02.013 =================================================================================================================== 00:31:02.013 Total : 16768.82 65.50 0.00 0.00 7627.50 2280.11 20534.61 00:31:02.013 0 00:31:02.013 22:57:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:02.013 22:57:46 -- host/digest.sh@92 -- # get_accel_stats 00:31:02.013 22:57:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:02.013 22:57:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:02.013 22:57:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:02.013 | select(.opcode=="crc32c") 00:31:02.013 | "\(.module_name) \(.executed)"' 00:31:02.013 22:57:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:02.013 22:57:46 -- host/digest.sh@93 -- # exp_module=software 00:31:02.013 22:57:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:02.013 22:57:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:02.013 22:57:46 -- host/digest.sh@97 -- # killprocess 1318671 00:31:02.013 22:57:46 -- common/autotest_common.sh@926 -- # '[' -z 1318671 ']' 00:31:02.013 22:57:46 -- common/autotest_common.sh@930 -- # kill -0 1318671 00:31:02.013 22:57:46 -- common/autotest_common.sh@931 -- # uname 00:31:02.013 22:57:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:02.013 22:57:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1318671 00:31:02.013 22:57:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:02.013 22:57:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:02.013 22:57:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1318671' 00:31:02.013 killing process with pid 1318671 00:31:02.013 22:57:46 -- common/autotest_common.sh@945 -- # kill 1318671 00:31:02.013 Received shutdown signal, test time was about 2.000000 seconds 00:31:02.013 00:31:02.013 Latency(us) 00:31:02.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.013 =================================================================================================================== 00:31:02.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.013 22:57:46 -- common/autotest_common.sh@950 -- # wait 1318671 00:31:02.013 22:57:46 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:31:02.013 22:57:46 -- host/digest.sh@77 -- # local rw bs qd 00:31:02.013 22:57:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:02.013 22:57:46 -- host/digest.sh@80 -- # rw=randread 00:31:02.013 22:57:46 -- host/digest.sh@80 -- # bs=131072 00:31:02.013 22:57:46 -- host/digest.sh@80 -- # qd=16 00:31:02.013 22:57:46 -- host/digest.sh@82 -- # bperfpid=1319418 00:31:02.013 22:57:46 -- host/digest.sh@83 -- # waitforlisten 1319418 /var/tmp/bperf.sock 00:31:02.013 22:57:46 -- common/autotest_common.sh@819 -- # '[' -z 1319418 ']' 00:31:02.013 22:57:46 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:02.013 22:57:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:31:02.013 22:57:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:02.013 22:57:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:02.013 22:57:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:02.013 22:57:46 -- common/autotest_common.sh@10 -- # set +x 00:31:02.013 [2024-04-15 22:57:46.598786] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:02.013 [2024-04-15 22:57:46.598838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319418 ] 00:31:02.013 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:02.013 Zero copy mechanism will not be used. 00:31:02.013 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.013 [2024-04-15 22:57:46.662977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.013 [2024-04-15 22:57:46.724501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.584 22:57:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:02.584 22:57:47 -- common/autotest_common.sh@852 -- # return 0 00:31:02.584 22:57:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:02.584 22:57:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:02.584 22:57:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:02.844 22:57:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:02.844 22:57:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.113 nvme0n1 00:31:03.379 22:57:47 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:03.379 22:57:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:03.379 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:03.379 Zero copy mechanism will not be used. 00:31:03.379 Running I/O for 2 seconds... 
00:31:05.292 00:31:05.292 Latency(us) 00:31:05.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.292 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:05.292 nvme0n1 : 2.00 2709.70 338.71 0.00 0.00 5900.98 1351.68 16056.32 00:31:05.292 =================================================================================================================== 00:31:05.292 Total : 2709.70 338.71 0.00 0.00 5900.98 1351.68 16056.32 00:31:05.292 0 00:31:05.292 22:57:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:05.292 22:57:50 -- host/digest.sh@92 -- # get_accel_stats 00:31:05.292 22:57:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:05.292 22:57:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:05.292 | select(.opcode=="crc32c") 00:31:05.292 | "\(.module_name) \(.executed)"' 00:31:05.292 22:57:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:05.553 22:57:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:05.553 22:57:50 -- host/digest.sh@93 -- # exp_module=software 00:31:05.553 22:57:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:05.553 22:57:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:05.553 22:57:50 -- host/digest.sh@97 -- # killprocess 1319418 00:31:05.553 22:57:50 -- common/autotest_common.sh@926 -- # '[' -z 1319418 ']' 00:31:05.553 22:57:50 -- common/autotest_common.sh@930 -- # kill -0 1319418 00:31:05.553 22:57:50 -- common/autotest_common.sh@931 -- # uname 00:31:05.553 22:57:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:05.553 22:57:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1319418 00:31:05.553 22:57:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:05.553 22:57:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:05.553 22:57:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1319418' 00:31:05.553 killing process with pid 1319418 00:31:05.553 22:57:50 -- common/autotest_common.sh@945 -- # kill 1319418 00:31:05.553 Received shutdown signal, test time was about 2.000000 seconds 00:31:05.553 00:31:05.553 Latency(us) 00:31:05.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.553 =================================================================================================================== 00:31:05.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.553 22:57:50 -- common/autotest_common.sh@950 -- # wait 1319418 00:31:05.814 22:57:50 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:31:05.814 22:57:50 -- host/digest.sh@77 -- # local rw bs qd 00:31:05.814 22:57:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:05.814 22:57:50 -- host/digest.sh@80 -- # rw=randwrite 00:31:05.814 22:57:50 -- host/digest.sh@80 -- # bs=4096 00:31:05.814 22:57:50 -- host/digest.sh@80 -- # qd=128 00:31:05.814 22:57:50 -- host/digest.sh@82 -- # bperfpid=1320202 00:31:05.814 22:57:50 -- host/digest.sh@83 -- # waitforlisten 1320202 /var/tmp/bperf.sock 00:31:05.814 22:57:50 -- common/autotest_common.sh@819 -- # '[' -z 1320202 ']' 00:31:05.814 22:57:50 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:05.814 22:57:50 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:31:05.814 22:57:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:05.814 22:57:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:05.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:05.814 22:57:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:05.814 22:57:50 -- common/autotest_common.sh@10 -- # set +x 00:31:05.814 [2024-04-15 22:57:50.417516] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:05.814 [2024-04-15 22:57:50.417575] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320202 ] 00:31:05.814 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.814 [2024-04-15 22:57:50.481550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.814 [2024-04-15 22:57:50.543048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.385 22:57:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:06.385 22:57:51 -- common/autotest_common.sh@852 -- # return 0 00:31:06.385 22:57:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:06.385 22:57:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:06.385 22:57:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:06.645 22:57:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.645 22:57:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.906 nvme0n1 00:31:06.906 22:57:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:06.906 22:57:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:07.167 Running I/O for 2 seconds... 
00:31:09.081 00:31:09.081 Latency(us) 00:31:09.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.081 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.081 nvme0n1 : 2.01 21888.38 85.50 0.00 0.00 5836.51 4096.00 10485.76 00:31:09.081 =================================================================================================================== 00:31:09.081 Total : 21888.38 85.50 0.00 0.00 5836.51 4096.00 10485.76 00:31:09.081 0 00:31:09.081 22:57:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:09.081 22:57:53 -- host/digest.sh@92 -- # get_accel_stats 00:31:09.081 22:57:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:09.081 22:57:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:09.081 | select(.opcode=="crc32c") 00:31:09.081 | "\(.module_name) \(.executed)"' 00:31:09.081 22:57:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:09.341 22:57:53 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:09.341 22:57:53 -- host/digest.sh@93 -- # exp_module=software 00:31:09.341 22:57:53 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:09.341 22:57:53 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:09.341 22:57:53 -- host/digest.sh@97 -- # killprocess 1320202 00:31:09.341 22:57:53 -- common/autotest_common.sh@926 -- # '[' -z 1320202 ']' 00:31:09.341 22:57:53 -- common/autotest_common.sh@930 -- # kill -0 1320202 00:31:09.341 22:57:53 -- common/autotest_common.sh@931 -- # uname 00:31:09.341 22:57:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:09.341 22:57:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1320202 00:31:09.341 22:57:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:09.341 22:57:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:09.341 22:57:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1320202' 00:31:09.341 killing process with pid 1320202 00:31:09.341 22:57:53 -- common/autotest_common.sh@945 -- # kill 1320202 00:31:09.341 Received shutdown signal, test time was about 2.000000 seconds 00:31:09.341 00:31:09.341 Latency(us) 00:31:09.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.341 =================================================================================================================== 00:31:09.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.341 22:57:53 -- common/autotest_common.sh@950 -- # wait 1320202 00:31:09.341 22:57:54 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:31:09.341 22:57:54 -- host/digest.sh@77 -- # local rw bs qd 00:31:09.341 22:57:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:09.341 22:57:54 -- host/digest.sh@80 -- # rw=randwrite 00:31:09.341 22:57:54 -- host/digest.sh@80 -- # bs=131072 00:31:09.341 22:57:54 -- host/digest.sh@80 -- # qd=16 00:31:09.341 22:57:54 -- host/digest.sh@82 -- # bperfpid=1320896 00:31:09.341 22:57:54 -- host/digest.sh@83 -- # waitforlisten 1320896 /var/tmp/bperf.sock 00:31:09.341 22:57:54 -- common/autotest_common.sh@819 -- # '[' -z 1320896 ']' 00:31:09.341 22:57:54 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:09.341 22:57:54 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:31:09.341 22:57:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:09.341 22:57:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:09.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:09.341 22:57:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:09.341 22:57:54 -- common/autotest_common.sh@10 -- # set +x 00:31:09.341 [2024-04-15 22:57:54.127152] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:09.341 [2024-04-15 22:57:54.127208] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320896 ] 00:31:09.341 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:09.341 Zero copy mechanism will not be used. 00:31:09.602 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.602 [2024-04-15 22:57:54.191002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.602 [2024-04-15 22:57:54.252499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.173 22:57:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:10.173 22:57:54 -- common/autotest_common.sh@852 -- # return 0 00:31:10.173 22:57:54 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:10.173 22:57:54 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:10.173 22:57:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:10.434 22:57:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:10.434 22:57:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:10.694 nvme0n1 00:31:10.694 22:57:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:10.694 22:57:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:10.694 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:10.694 Zero copy mechanism will not be used. 00:31:10.694 Running I/O for 2 seconds... 
00:31:13.238 00:31:13.238 Latency(us) 00:31:13.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.238 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:13.238 nvme0n1 : 2.00 5184.12 648.01 0.00 0.00 3081.75 1563.31 12997.97 00:31:13.238 =================================================================================================================== 00:31:13.238 Total : 5184.12 648.01 0.00 0.00 3081.75 1563.31 12997.97 00:31:13.238 0 00:31:13.238 22:57:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:13.238 22:57:57 -- host/digest.sh@92 -- # get_accel_stats 00:31:13.238 22:57:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:13.238 22:57:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:13.238 | select(.opcode=="crc32c") 00:31:13.238 | "\(.module_name) \(.executed)"' 00:31:13.238 22:57:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:13.238 22:57:57 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:13.238 22:57:57 -- host/digest.sh@93 -- # exp_module=software 00:31:13.238 22:57:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:13.238 22:57:57 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:13.238 22:57:57 -- host/digest.sh@97 -- # killprocess 1320896 00:31:13.238 22:57:57 -- common/autotest_common.sh@926 -- # '[' -z 1320896 ']' 00:31:13.238 22:57:57 -- common/autotest_common.sh@930 -- # kill -0 1320896 00:31:13.238 22:57:57 -- common/autotest_common.sh@931 -- # uname 00:31:13.238 22:57:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:13.239 22:57:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1320896 00:31:13.239 22:57:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:13.239 22:57:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:13.239 22:57:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1320896' 00:31:13.239 killing process with pid 1320896 00:31:13.239 22:57:57 -- common/autotest_common.sh@945 -- # kill 1320896 00:31:13.239 Received shutdown signal, test time was about 2.000000 seconds 00:31:13.239 00:31:13.239 Latency(us) 00:31:13.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.239 =================================================================================================================== 00:31:13.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.239 22:57:57 -- common/autotest_common.sh@950 -- # wait 1320896 00:31:13.239 22:57:57 -- host/digest.sh@126 -- # killprocess 1318468 00:31:13.239 22:57:57 -- common/autotest_common.sh@926 -- # '[' -z 1318468 ']' 00:31:13.239 22:57:57 -- common/autotest_common.sh@930 -- # kill -0 1318468 00:31:13.239 22:57:57 -- common/autotest_common.sh@931 -- # uname 00:31:13.239 22:57:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:13.239 22:57:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1318468 00:31:13.239 22:57:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:13.239 22:57:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:13.239 22:57:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1318468' 00:31:13.239 killing process with pid 1318468 00:31:13.239 22:57:57 -- common/autotest_common.sh@945 -- # kill 1318468 00:31:13.239 22:57:57 -- common/autotest_common.sh@950 -- # wait 1318468 
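Each of the four nvmf_digest_clean runs above repeats the same host-side sequence visible in the trace: start bdevperf against a private RPC socket, finish framework init, attach an NVMe/TCP controller with data digest enabled, drive I/O for two seconds, then read the crc32c accel statistics. A condensed sketch of one run, using the paths and arguments exactly as they appear in the trace (SPDK_ROOT is shorthand introduced here for the workspace checkout):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # host-side benchmark, held at --wait-for-rpc until the framework is initialized
  $SPDK_ROOT/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # initialize the framework, then attach the remote namespace with TCP data digest (--ddgst) enabled
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the queued workload for the configured 2 seconds
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # confirm which accel module executed the crc32c digests (software in these runs)
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The four runs only vary the -w/-o/-q arguments (randread vs randwrite, 4096 vs 131072 byte I/O, queue depth 128 vs 16), which is why the per-run latency tables above differ while the control flow stays identical.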
00:31:13.239 00:31:13.239 real 0m16.110s 00:31:13.239 user 0m31.234s 00:31:13.239 sys 0m3.523s 00:31:13.239 22:57:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.239 22:57:58 -- common/autotest_common.sh@10 -- # set +x 00:31:13.239 ************************************ 00:31:13.239 END TEST nvmf_digest_clean 00:31:13.239 ************************************ 00:31:13.239 22:57:58 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:31:13.239 22:57:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:13.239 22:57:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:13.500 22:57:58 -- common/autotest_common.sh@10 -- # set +x 00:31:13.500 ************************************ 00:31:13.500 START TEST nvmf_digest_error 00:31:13.500 ************************************ 00:31:13.500 22:57:58 -- common/autotest_common.sh@1104 -- # run_digest_error 00:31:13.500 22:57:58 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:31:13.500 22:57:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:13.500 22:57:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:13.500 22:57:58 -- common/autotest_common.sh@10 -- # set +x 00:31:13.500 22:57:58 -- nvmf/common.sh@469 -- # nvmfpid=1321614 00:31:13.500 22:57:58 -- nvmf/common.sh@470 -- # waitforlisten 1321614 00:31:13.500 22:57:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:13.500 22:57:58 -- common/autotest_common.sh@819 -- # '[' -z 1321614 ']' 00:31:13.500 22:57:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.500 22:57:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:13.500 22:57:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.500 22:57:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:13.500 22:57:58 -- common/autotest_common.sh@10 -- # set +x 00:31:13.500 [2024-04-15 22:57:58.117540] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:13.500 [2024-04-15 22:57:58.117609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.500 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.500 [2024-04-15 22:57:58.189092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.500 [2024-04-15 22:57:58.251483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:13.500 [2024-04-15 22:57:58.251609] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.500 [2024-04-15 22:57:58.251617] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.500 [2024-04-15 22:57:58.251625] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:13.500 [2024-04-15 22:57:58.251649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.167 22:57:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:14.167 22:57:58 -- common/autotest_common.sh@852 -- # return 0 00:31:14.167 22:57:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:14.167 22:57:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:14.167 22:57:58 -- common/autotest_common.sh@10 -- # set +x 00:31:14.167 22:57:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.167 22:57:58 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:14.167 22:57:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.167 22:57:58 -- common/autotest_common.sh@10 -- # set +x 00:31:14.168 [2024-04-15 22:57:58.909517] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:14.168 22:57:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.168 22:57:58 -- host/digest.sh@104 -- # common_target_config 00:31:14.168 22:57:58 -- host/digest.sh@43 -- # rpc_cmd 00:31:14.168 22:57:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.168 22:57:58 -- common/autotest_common.sh@10 -- # set +x 00:31:14.459 null0 00:31:14.459 [2024-04-15 22:57:58.986727] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.459 [2024-04-15 22:57:59.010904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.459 22:57:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.459 22:57:59 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:31:14.459 22:57:59 -- host/digest.sh@54 -- # local rw bs qd 00:31:14.459 22:57:59 -- host/digest.sh@56 -- # rw=randread 00:31:14.459 22:57:59 -- host/digest.sh@56 -- # bs=4096 00:31:14.459 22:57:59 -- host/digest.sh@56 -- # qd=128 00:31:14.459 22:57:59 -- host/digest.sh@58 -- # bperfpid=1321895 00:31:14.459 22:57:59 -- host/digest.sh@60 -- # waitforlisten 1321895 /var/tmp/bperf.sock 00:31:14.459 22:57:59 -- common/autotest_common.sh@819 -- # '[' -z 1321895 ']' 00:31:14.459 22:57:59 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:14.459 22:57:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:14.459 22:57:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:14.459 22:57:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:14.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:14.459 22:57:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:14.459 22:57:59 -- common/autotest_common.sh@10 -- # set +x 00:31:14.459 [2024-04-15 22:57:59.071366] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:31:14.459 [2024-04-15 22:57:59.071416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321895 ] 00:31:14.459 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.459 [2024-04-15 22:57:59.135520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.459 [2024-04-15 22:57:59.198772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.032 22:57:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:15.032 22:57:59 -- common/autotest_common.sh@852 -- # return 0 00:31:15.032 22:57:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:15.032 22:57:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:15.293 22:57:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:15.293 22:57:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.293 22:57:59 -- common/autotest_common.sh@10 -- # set +x 00:31:15.293 22:57:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.293 22:57:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.293 22:57:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.553 nvme0n1 00:31:15.815 22:58:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:15.815 22:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.815 22:58:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.815 22:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.815 22:58:00 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:15.815 22:58:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:15.815 Running I/O for 2 seconds... 
00:31:15.815 [2024-04-15 22:58:00.492769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070)
00:31:15.815 [2024-04-15 22:58:00.492807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:15.815 [2024-04-15 22:58:00.492819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats for each subsequent READ on qid:1 from 22:58:00.503948 through 22:58:02.137945: nvme_tcp.c:1391 reports "data digest error on tqpair=(0x7fd070)", the affected READ (cid and lba vary, len:1, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd:0001 p:0 m:0 dnr:0 ...]
00:31:17.387 [2024-04-15 22:58:02.153763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070)
00:31:17.387 [2024-04-15 22:58:02.153785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.387 [2024-04-15 22:58:02.153794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:17.387 [2024-04-15 22:58:02.170320]
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.387 [2024-04-15 22:58:02.170342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.387 [2024-04-15 22:58:02.170351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.387 [2024-04-15 22:58:02.185935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.387 [2024-04-15 22:58:02.185957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.387 [2024-04-15 22:58:02.185966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.202235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.202256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.202265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.218293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.218315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.218323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.233330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.233351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.233360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.248134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.248155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.248164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.264532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.264557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.264570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:31:17.659 [2024-04-15 22:58:02.278525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.278549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.278559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.289804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.289825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.289834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.305652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.305672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.305681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.320644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.320666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.320674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.337052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.337074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.337082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.353112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.353133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.353143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.368917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.368938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.368947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.385230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.385251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.385260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.400881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.400906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.400915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.415009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.415029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.415038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.425950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.425970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.425979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.441372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.441392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.441401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.659 [2024-04-15 22:58:02.458025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.659 [2024-04-15 22:58:02.458047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.659 [2024-04-15 22:58:02.458056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.920 [2024-04-15 22:58:02.474211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fd070) 00:31:17.920 [2024-04-15 22:58:02.474233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.920 [2024-04-15 22:58:02.474242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:17.920
00:31:17.920 Latency(us)
00:31:17.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:17.920 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:17.920 nvme0n1 : 2.01 20279.55 79.22 0.00 0.00 6304.10 2471.25 21408.43
00:31:17.920 ===================================================================================================================
00:31:17.920 Total : 20279.55 79.22 0.00 0.00 6304.10 2471.25 21408.43
00:31:17.920 0
00:31:17.920 22:58:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:17.920 22:58:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:17.920 | .driver_specific
00:31:17.920 | .nvme_error
00:31:17.920 | .status_code
00:31:17.920 | .command_transient_transport_error'
00:31:17.920 22:58:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:17.920 22:58:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:17.920 22:58:02 -- host/digest.sh@71 -- # (( 159 > 0 ))
00:31:17.920 22:58:02 -- host/digest.sh@73 -- # killprocess 1321895
00:31:17.920 22:58:02 -- common/autotest_common.sh@926 -- # '[' -z 1321895 ']'
00:31:17.920 22:58:02 -- common/autotest_common.sh@930 -- # kill -0 1321895
00:31:17.920 22:58:02 -- common/autotest_common.sh@931 -- # uname
00:31:17.920 22:58:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:17.920 22:58:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1321895
00:31:17.920 22:58:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:17.920 22:58:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:17.920 22:58:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1321895'
00:31:17.920 killing process with pid 1321895
00:31:17.920 22:58:02 -- common/autotest_common.sh@945 -- # kill 1321895
00:31:17.920 Received shutdown signal, test time was about 2.000000 seconds
00:31:17.920
00:31:17.920 Latency(us)
00:31:17.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:17.920 ===================================================================================================================
00:31:17.920 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:17.920 22:58:02 -- common/autotest_common.sh@950 -- # wait 1321895
00:31:17.920 22:58:02 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:31:18.181 22:58:02 -- host/digest.sh@54 -- # local rw bs qd
00:31:18.181 22:58:02 -- host/digest.sh@56 -- # rw=randread
00:31:18.181 22:58:02 -- host/digest.sh@56 -- # bs=131072
00:31:18.181 22:58:02 -- host/digest.sh@56 -- # qd=16
00:31:18.181 22:58:02 -- host/digest.sh@58 -- # bperfpid=1322654
00:31:18.181 22:58:02 -- host/digest.sh@60 -- # waitforlisten 1322654 /var/tmp/bperf.sock
00:31:18.181 22:58:02 -- common/autotest_common.sh@819 -- # '[' -z 1322654 ']'
00:31:18.181 22:58:02 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:31:18.181 22:58:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:18.181 22:58:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:18.181 22:58:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:18.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:18.181 22:58:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:18.181 22:58:02 -- common/autotest_common.sh@10 -- # set +x
00:31:18.181 [2024-04-15 22:58:02.889885] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:31:18.181 [2024-04-15 22:58:02.889938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322654 ]
00:31:18.181 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:18.181 Zero copy mechanism will not be used.
00:31:18.181 EAL: No free 2048 kB hugepages reported on node 1
00:31:18.442 [2024-04-15 22:58:02.954354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:18.442 [2024-04-15 22:58:03.015590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:19.013 22:58:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:19.013 22:58:03 -- common/autotest_common.sh@852 -- # return 0
00:31:19.013 22:58:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:19.013 22:58:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:19.013 22:58:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:19.013 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:19.013 22:58:03 -- common/autotest_common.sh@10 -- # set +x
00:31:19.273 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:19.273 22:58:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:19.273 22:58:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:19.534 nvme0n1
00:31:19.534 22:58:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:19.534 22:58:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:19.534 22:58:04 -- common/autotest_common.sh@10 -- # set +x
00:31:19.534 22:58:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:19.534 22:58:04 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:19.534 22:58:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:19.534 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:19.534 Zero copy mechanism will not be used.
00:31:19.534 Running I/O for 2 seconds...
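
(For reference, the MiB/s column in the completed-run table above is simply IOPS times the 4096-byte read size: 20279.55 * 4096 / 2^20 = 79.22 MiB/s.) The trace above is the whole error-injection flow for this run: start bdevperf against a unix-domain RPC socket, enable per-command NVMe error statistics with unlimited bdev retries, attach the remote controller with data digest enabled, tell the accel framework to corrupt every 32nd crc32c result, run the workload, and finally read the transient-transport-error counter back out of bdev_get_iostat. The lines below are a minimal standalone sketch of that same sequence, not the digest.sh helpers themselves; it assumes an SPDK checkout at the workspace path shown in this log and an NVMe-oF TCP target already listening at 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 (both taken from the trace), and the variable names are illustrative:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: checkout location as in this job
    BPERF_SOCK=/var/tmp/bperf.sock

    # start bdevperf as the TCP initiator: core mask 0x2, 128 KiB random reads, queue depth 16, wait (-z) for RPCs
    $SPDK_DIR/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # keep per-command NVMe error statistics and retry failed I/O indefinitely so digest errors do not abort the job
    $SPDK_DIR/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # attach the remote controller with data digest enabled (--ddgst); this creates bdev nvme0n1
    $SPDK_DIR/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt every 32nd crc32c calculation so the initiator observes data digest errors; in the trace this
    # is issued with rpc_cmd, i.e. against the target application's default RPC socket rather than $BPERF_SOCK
    $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # run the 2-second job, then read back how many completions carried TRANSIENT TRANSPORT ERROR (00/22)
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
    errs=$($SPDK_DIR/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 )) && echo "observed $errs transient transport errors"

    kill $bperfpid

Each injected digest error then shows up as an *ERROR* line from nvme_tcp.c followed by the affected READ command and its completion with status 00/22 (transient transport error), which is exactly what the counter read back above accumulates.
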
00:31:19.534 [2024-04-15 22:58:04.324637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.534 [2024-04-15 22:58:04.324674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.534 [2024-04-15 22:58:04.324686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:19.534 [2024-04-15 22:58:04.336143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.534 [2024-04-15 22:58:04.336170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.534 [2024-04-15 22:58:04.336179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:19.796 [2024-04-15 22:58:04.347436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.796 [2024-04-15 22:58:04.347462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.796 [2024-04-15 22:58:04.347472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:19.796 [2024-04-15 22:58:04.358744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.796 [2024-04-15 22:58:04.358767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.796 [2024-04-15 22:58:04.358777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.796 [2024-04-15 22:58:04.370116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.796 [2024-04-15 22:58:04.370140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.796 [2024-04-15 22:58:04.370149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:19.796 [2024-04-15 22:58:04.381250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.796 [2024-04-15 22:58:04.381272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.796 [2024-04-15 22:58:04.381281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:19.796 [2024-04-15 22:58:04.393512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.796 [2024-04-15 22:58:04.393535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.796 [2024-04-15 22:58:04.393558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:19.796 [2024-04-15 22:58:04.405412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.796 [2024-04-15 22:58:04.405433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.405450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.417792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.417813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.417822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.430509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.430531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.430540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.441715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.441736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.441745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.453331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.453353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.453362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.465254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.465275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.465284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.476503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.476525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.476534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.487969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.487991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.488000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.500040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.500062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.500071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.512816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.512838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.512847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.524313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.524335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.524343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.535766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.535788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.535797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.546452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.546473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.546482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.556996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.557017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:19.797 [2024-04-15 22:58:04.557026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.568086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.568107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.568116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.579619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.579640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.579649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.590988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.591009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.591018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.797 [2024-04-15 22:58:04.601883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:19.797 [2024-04-15 22:58:04.601904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.797 [2024-04-15 22:58:04.601916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.614752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.614773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.614782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.625717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.625739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.625747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.636602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.636623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.636632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.648008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.648029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.648038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.657982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.658004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.658012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.668148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.668169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.668177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.677000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.677020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.677028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.687238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.687259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.687268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.697519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.697549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.697559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.709450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.709470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.709479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.720730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.720750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.720759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.732316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.732337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.059 [2024-04-15 22:58:04.732345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.059 [2024-04-15 22:58:04.744461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.059 [2024-04-15 22:58:04.744481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.744490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.753980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.754001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.754010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.765496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.765517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.765526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.777340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.777361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.777370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.789960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 
00:31:20.060 [2024-04-15 22:58:04.789981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.789990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.801961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.801983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.801993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.815493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.815514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.815523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.825674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.825695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.825704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.836405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.836427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.836435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.846384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.846405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.846413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.060 [2024-04-15 22:58:04.856939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.060 [2024-04-15 22:58:04.856960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.060 [2024-04-15 22:58:04.856969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.322 [2024-04-15 22:58:04.869515] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.322 [2024-04-15 22:58:04.869536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-04-15 22:58:04.869549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.322 [2024-04-15 22:58:04.879661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.322 [2024-04-15 22:58:04.879682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-04-15 22:58:04.879690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.322 [2024-04-15 22:58:04.892932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.322 [2024-04-15 22:58:04.892954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-04-15 22:58:04.892966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.322 [2024-04-15 22:58:04.905920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.322 [2024-04-15 22:58:04.905941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-04-15 22:58:04.905950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.322 [2024-04-15 22:58:04.918162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.322 [2024-04-15 22:58:04.918183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-04-15 22:58:04.918192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.322 [2024-04-15 22:58:04.929785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.322 [2024-04-15 22:58:04.929805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-04-15 22:58:04.929814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.322 [2024-04-15 22:58:04.940627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.322 [2024-04-15 22:58:04.940649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-04-15 22:58:04.940657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:31:20.322 [2024-04-15 22:58:04.953237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.322 [2024-04-15 22:58:04.953258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-04-15 22:58:04.953266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:04.964650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:04.964671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:04.964679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:04.976077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:04.976098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:04.976107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:04.987925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:04.987946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:04.987955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.000289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.000310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.000319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.013659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.013680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.013689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.030034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.030055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.030064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.038097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.038118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.038127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.046247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.046268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.046276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.053078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.053100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.053109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.059747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.059768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.059776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.065848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.065869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.065879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.072528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.072555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.072568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.082455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.082476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.082485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.093500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.093522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.093531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.105677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.105699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.105708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.116653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.116674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.116683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.323 [2024-04-15 22:58:05.126411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.323 [2024-04-15 22:58:05.126433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-04-15 22:58:05.126441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.137982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.138004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.138013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.152368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.152390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.152400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.165804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.165826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 
[2024-04-15 22:58:05.165835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.179845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.179871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.179879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.193216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.193238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.193247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.205779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.205801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.205810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.218419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.218440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.218449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.231209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.231230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.231239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.243411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.243432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.243441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.253637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.253659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.253668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.262523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.262550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.262559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.274333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.274354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.274363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.285475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.285497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.285506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.297665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.297687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.297696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.308755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.308777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.308785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.321002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.321023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.321032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.332645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.332667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.332675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.344587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.344609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.344618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.356911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.356933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.356942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.368496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.368519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.368528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.377283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.377304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.377317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.586 [2024-04-15 22:58:05.384963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.586 [2024-04-15 22:58:05.384985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.586 [2024-04-15 22:58:05.384993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.848 [2024-04-15 22:58:05.396207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.848 [2024-04-15 22:58:05.396229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.848 [2024-04-15 22:58:05.396238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.848 [2024-04-15 22:58:05.407992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.408013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.408022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.419607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.419628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.419637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.431294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.431315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.431324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.440249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.440270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.440279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.452064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.452084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.452093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.464114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.464135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.464144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.475081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.475107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.475116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.487417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 
[2024-04-15 22:58:05.487439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.487448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.497370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.497391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.497400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.509815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.509837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.509845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.521918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.521941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.521949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.533207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.533229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.533238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.544994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.545015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.545024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.557112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.557134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.557143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.567692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.567714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.567722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.577734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.577756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.577764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.586439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.586460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.586469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.596535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.596562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.596571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.607259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.607281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.607289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.617996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.618018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.618026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.629318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.629340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.629349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.640572] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.640593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.640602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.849 [2024-04-15 22:58:05.651138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:20.849 [2024-04-15 22:58:05.651160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.849 [2024-04-15 22:58:05.651169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.662983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.663006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.663018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.674392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.674413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.674422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.684381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.684403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.684412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.694192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.694214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.694223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.705389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.705410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.705419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:21.111 [2024-04-15 22:58:05.717495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.717516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.717525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.727803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.727825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.727834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.739454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.739475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.739484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.748648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.748669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.748678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.757580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.757602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.757612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.765855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.765877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.765886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.775083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.775105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.775114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.783265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.783287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.783295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.795771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.795792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.795801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.808361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.808382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.808391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.819456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.819478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.819487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.831504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.831525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.831534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.843222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.843244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.111 [2024-04-15 22:58:05.843259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.111 [2024-04-15 22:58:05.852643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.111 [2024-04-15 22:58:05.852665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.112 [2024-04-15 22:58:05.852674] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.112 [2024-04-15 22:58:05.860886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.112 [2024-04-15 22:58:05.860907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.112 [2024-04-15 22:58:05.860916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.112 [2024-04-15 22:58:05.868203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.112 [2024-04-15 22:58:05.868224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.112 [2024-04-15 22:58:05.868233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.112 [2024-04-15 22:58:05.877520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.112 [2024-04-15 22:58:05.877541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.112 [2024-04-15 22:58:05.877555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.112 [2024-04-15 22:58:05.889130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.112 [2024-04-15 22:58:05.889152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.112 [2024-04-15 22:58:05.889160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.112 [2024-04-15 22:58:05.901446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.112 [2024-04-15 22:58:05.901469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.112 [2024-04-15 22:58:05.901477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.112 [2024-04-15 22:58:05.913469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.112 [2024-04-15 22:58:05.913490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.112 [2024-04-15 22:58:05.913499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.374 [2024-04-15 22:58:05.923294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.374 [2024-04-15 22:58:05.923316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.374 [2024-04-15 22:58:05.923325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.374 [2024-04-15 22:58:05.932353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.374 [2024-04-15 22:58:05.932379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.374 [2024-04-15 22:58:05.932388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.374 [2024-04-15 22:58:05.944096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.374 [2024-04-15 22:58:05.944118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.374 [2024-04-15 22:58:05.944127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.374 [2024-04-15 22:58:05.956500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.374 [2024-04-15 22:58:05.956523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:05.956532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:05.967843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:05.967865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:05.967874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:05.978906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:05.978928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:05.978937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:05.990916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:05.990938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:05.990946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.002357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.002380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:21.375 [2024-04-15 22:58:06.002389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.013724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.013746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.013754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.023431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.023453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.023462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.032265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.032287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.032295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.041052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.041074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.041082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.051987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.052008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.052018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.061296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.061319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.061328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.073347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.073368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.073377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.085488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.085511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.085519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.097612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.097633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.097642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.108733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.108754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.108763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.120558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.120579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.120592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.129899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.129921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.129929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.139181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.139202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.139211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.148125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.148147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.148156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.157303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.157324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.157333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.169302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.169324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.169332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.375 [2024-04-15 22:58:06.182084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.375 [2024-04-15 22:58:06.182106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.375 [2024-04-15 22:58:06.182115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.194766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.194788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.194797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.207661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.207683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.207692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.220361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.220387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.220396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.233392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 
[2024-04-15 22:58:06.233413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.233422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.246021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.246042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.246051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.257632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.257654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.257662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.268298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.268320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.268330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.279769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.279791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.279800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.290920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.290943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.290951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.299453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a24d00) 00:31:21.638 [2024-04-15 22:58:06.299475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.638 [2024-04-15 22:58:06.299484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.638 [2024-04-15 22:58:06.308719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1a24d00)
00:31:21.638 [2024-04-15 22:58:06.308741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:21.638 [2024-04-15 22:58:06.308750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:21.638
00:31:21.638 Latency(us)
00:31:21.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:21.638 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:21.638 nvme0n1 : 2.00 2796.31 349.54 0.00 0.00 5717.43 1262.93 15619.41
00:31:21.638 ===================================================================================================================
00:31:21.638 Total : 2796.31 349.54 0.00 0.00 5717.43 1262.93 15619.41
00:31:21.638 0
00:31:21.638 22:58:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:21.638 22:58:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:21.638 22:58:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:21.638 | .driver_specific
00:31:21.638 | .nvme_error
00:31:21.638 | .status_code
00:31:21.638 | .command_transient_transport_error'
00:31:21.638 22:58:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:21.900 22:58:06 -- host/digest.sh@71 -- # (( 180 > 0 ))
00:31:21.900 22:58:06 -- host/digest.sh@73 -- # killprocess 1322654
00:31:21.900 22:58:06 -- common/autotest_common.sh@926 -- # '[' -z 1322654 ']'
00:31:21.900 22:58:06 -- common/autotest_common.sh@930 -- # kill -0 1322654
00:31:21.900 22:58:06 -- common/autotest_common.sh@931 -- # uname
00:31:21.900 22:58:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:21.900 22:58:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1322654
00:31:21.900 22:58:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:21.900 22:58:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:21.900 22:58:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1322654'
00:31:21.900 killing process with pid 1322654
00:31:21.900 22:58:06 -- common/autotest_common.sh@945 -- # kill 1322654
00:31:21.900 Received shutdown signal, test time was about 2.000000 seconds
00:31:21.900
00:31:21.900 Latency(us)
00:31:21.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:21.900 ===================================================================================================================
00:31:21.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:21.900 22:58:06 -- common/autotest_common.sh@950 -- # wait 1322654
00:31:21.900 22:58:06 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:31:21.900 22:58:06 -- host/digest.sh@54 -- # local rw bs qd
00:31:21.900 22:58:06 -- host/digest.sh@56 -- # rw=randwrite
00:31:21.900 22:58:06 -- host/digest.sh@56 -- # bs=4096
00:31:21.900 22:58:06 -- host/digest.sh@56 -- # qd=128
00:31:21.900 22:58:06 -- host/digest.sh@58 -- # bperfpid=1323351
00:31:21.900 22:58:06 -- host/digest.sh@60 -- # waitforlisten 1323351 /var/tmp/bperf.sock
00:31:21.900 22:58:06 -- common/autotest_common.sh@819 -- # '[' -z 1323351 ']'
00:31:21.900 22:58:06 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
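The get_transient_errcount step traced above reads the bdev's NVMe error counters over the bperf RPC socket and picks out the transient transport error count; reassembled from the fragments in this log (same rpc.py path, socket and bdev name), it amounts to roughly this sketch:

# Read the TRANSIENT TRANSPORT ERROR completion count collected for nvme0n1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
# In this run it returned 180, so the (( 180 > 0 )) check above passed.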
22:58:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:21.900 22:58:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:21.900 22:58:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:21.900 22:58:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:21.900 22:58:06 -- common/autotest_common.sh@10 -- # set +x 00:31:22.162 [2024-04-15 22:58:06.709260] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:22.162 [2024-04-15 22:58:06.709313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323351 ] 00:31:22.162 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.162 [2024-04-15 22:58:06.773096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.162 [2024-04-15 22:58:06.834503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.737 22:58:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:22.737 22:58:07 -- common/autotest_common.sh@852 -- # return 0 00:31:22.737 22:58:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:22.737 22:58:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:22.998 22:58:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:22.998 22:58:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.998 22:58:07 -- common/autotest_common.sh@10 -- # set +x 00:31:22.998 22:58:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.998 22:58:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:22.998 22:58:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:22.998 nvme0n1 00:31:23.260 22:58:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:23.260 22:58:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.260 22:58:07 -- common/autotest_common.sh@10 -- # set +x 00:31:23.260 22:58:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.260 22:58:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:23.260 22:58:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:23.260 Running I/O for 2 seconds... 
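The xtrace output above is the entire digest-error flow for this run: enable NVMe error counters with unlimited bdev retries, attach the TCP controller with data digest enabled, corrupt the next 256 crc32c operations in the accel layer, drive I/O through bdevperf, then read the transient-transport-error counter back. A minimal standalone sketch of that flow, distilled from the trace and not part of host/digest.sh itself; it assumes the same bperf RPC socket (/var/tmp/bperf.sock) and controller/bdev names (nvme0/nvme0n1) as this run, with the long workspace paths abbreviated to the SPDK tree root:

  # count injected digest errors and retry them instead of failing the bdev I/O
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with data digest enabled (--ddgst), so every TCP data PDU carries a CRC32C
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 256 crc32c operations in the accel layer, then run the bdevperf workload
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # each corrupted digest surfaces as a TRANSIENT TRANSPORT ERROR completion; read the counter back
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Because --bdev-retry-count -1 retries the corrupted completions rather than failing them, the workload still finishes, and the injected digest errors are visible only through that counter, which the test asserts is non-zero (the "(( 180 > 0 ))" check in the previous randread pass above).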
00:31:23.260 [2024-04-15 22:58:07.927414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ee190 00:31:23.260 [2024-04-15 22:58:07.928696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:07.928728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:07.939075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e9e10 00:31:23.260 [2024-04-15 22:58:07.939981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:07.940003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:07.952050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ecc78 00:31:23.260 [2024-04-15 22:58:07.953675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:07.953697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:07.963505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f4b08 00:31:23.260 [2024-04-15 22:58:07.965181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:07.965201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:07.974930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e4de8 00:31:23.260 [2024-04-15 22:58:07.976635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:07.976655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:07.986433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f5be8 00:31:23.260 [2024-04-15 22:58:07.988101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:07.988122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:07.996558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f5378 00:31:23.260 [2024-04-15 22:58:07.998172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:07.998194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:08.007770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5220 00:31:23.260 [2024-04-15 22:58:08.008858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:08.008879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:08.019196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f8a50 00:31:23.260 [2024-04-15 22:58:08.020385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:08.020405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:08.031671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f2510 00:31:23.260 [2024-04-15 22:58:08.033301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:08.033322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:08.043022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ed0b0 00:31:23.260 [2024-04-15 22:58:08.044656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:08.044676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:08.054359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e3060 00:31:23.260 [2024-04-15 22:58:08.055956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.260 [2024-04-15 22:58:08.055976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:23.260 [2024-04-15 22:58:08.065799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e9168 00:31:23.261 [2024-04-15 22:58:08.067368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.261 [2024-04-15 22:58:08.067388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:23.522 [2024-04-15 22:58:08.077197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5ec8 00:31:23.522 [2024-04-15 22:58:08.078795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.522 [2024-04-15 22:58:08.078819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:23.522 [2024-04-15 22:58:08.088592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f46d0 00:31:23.522 [2024-04-15 22:58:08.090190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.522 [2024-04-15 22:58:08.090210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:23.522 [2024-04-15 22:58:08.100284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e3060 00:31:23.522 [2024-04-15 22:58:08.101824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.522 [2024-04-15 22:58:08.101845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:23.522 [2024-04-15 22:58:08.111720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0ff8 00:31:23.522 [2024-04-15 22:58:08.113286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.522 [2024-04-15 22:58:08.113306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:23.522 [2024-04-15 22:58:08.123541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f7100 00:31:23.523 [2024-04-15 22:58:08.124512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.124531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.133771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eb328 00:31:23.523 [2024-04-15 22:58:08.134839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.134859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.145059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eaab8 00:31:23.523 [2024-04-15 22:58:08.146128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.146147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.156405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f6cc8 00:31:23.523 [2024-04-15 22:58:08.157520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.157541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.167793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f7970 00:31:23.523 [2024-04-15 22:58:08.169220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.169241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.179183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f7970 00:31:23.523 [2024-04-15 22:58:08.180322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.180342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.190533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f7970 00:31:23.523 [2024-04-15 22:58:08.191767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.191787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.201908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f7970 00:31:23.523 [2024-04-15 22:58:08.203182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.203202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.212721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f35f0 00:31:23.523 [2024-04-15 22:58:08.213739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.213759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.224130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f6cc8 00:31:23.523 [2024-04-15 22:58:08.225123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.225143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.235577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0bc0 00:31:23.523 [2024-04-15 22:58:08.236643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.236663] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.247004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eb760 00:31:23.523 [2024-04-15 22:58:08.248085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.248105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.260437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eaef0 00:31:23.523 [2024-04-15 22:58:08.262165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.262185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.272123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e0a68 00:31:23.523 [2024-04-15 22:58:08.273926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.273946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.282217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e0630 00:31:23.523 [2024-04-15 22:58:08.283877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.283897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.293407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e88f8 00:31:23.523 [2024-04-15 22:58:08.295134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.295154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.304319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f4b08 00:31:23.523 [2024-04-15 22:58:08.304620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.304640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.317449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e3498 00:31:23.523 [2024-04-15 22:58:08.319045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.319067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:23.523 [2024-04-15 22:58:08.328877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e6300 00:31:23.523 [2024-04-15 22:58:08.330474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.523 [2024-04-15 22:58:08.330494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.340253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f1ca0 00:31:23.785 [2024-04-15 22:58:08.341889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.341908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.351653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ee5c8 00:31:23.785 [2024-04-15 22:58:08.353267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.353287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.363351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e38d0 00:31:23.785 [2024-04-15 22:58:08.364928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.364948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.374770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e01f8 00:31:23.785 [2024-04-15 22:58:08.376372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.376396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.386201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f81e0 00:31:23.785 [2024-04-15 22:58:08.387857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.387877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.397593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e38d0 00:31:23.785 [2024-04-15 22:58:08.399258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 
22:58:08.399277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.407837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e8d30 00:31:23.785 [2024-04-15 22:58:08.408747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.408767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.420631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0788 00:31:23.785 [2024-04-15 22:58:08.422239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.422260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.432055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e9e10 00:31:23.785 [2024-04-15 22:58:08.433713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.433733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.442552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ed4e8 00:31:23.785 [2024-04-15 22:58:08.443750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.443770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.453811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f1868 00:31:23.785 [2024-04-15 22:58:08.455020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.455040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.465170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f3e60 00:31:23.785 [2024-04-15 22:58:08.466299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-04-15 22:58:08.466320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.476513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f3e60 00:31:23.785 [2024-04-15 22:58:08.477735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:23.785 [2024-04-15 22:58:08.477756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.785 [2024-04-15 22:58:08.487911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f3e60 00:31:23.786 [2024-04-15 22:58:08.489197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.489217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.498340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e2c28 00:31:23.786 [2024-04-15 22:58:08.499535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.499560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.509475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ed920 00:31:23.786 [2024-04-15 22:58:08.510363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.510382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.520910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f3e60 00:31:23.786 [2024-04-15 22:58:08.521825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.521845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.532982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f1868 00:31:23.786 [2024-04-15 22:58:08.533961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.533981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.544428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f5be8 00:31:23.786 [2024-04-15 22:58:08.545440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.545460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.555851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f8e88 00:31:23.786 [2024-04-15 22:58:08.556859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16980 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.556878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.567272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f4298 00:31:23.786 [2024-04-15 22:58:08.568284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.568304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.578708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ed920 00:31:23.786 [2024-04-15 22:58:08.579741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.579761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:23.786 [2024-04-15 22:58:08.590141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:23.786 [2024-04-15 22:58:08.591188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.786 [2024-04-15 22:58:08.591208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.601567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.602632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.602651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.612989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.614068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.614088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.624400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.625491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.625511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.635853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.636914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:15370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.636934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.647273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.648376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.648395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.658679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.659797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.659817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.670123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.671254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.671285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.681522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.682667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.682687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.692919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.694076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.694096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.704335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.705501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.705520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.715722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.716900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:24285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.716920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.727119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.728305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.728325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.738562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.739765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.739785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.749988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.751202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.751222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.761398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.762638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.762657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.772806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.774055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.774075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.784209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.785454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.785473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.795634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.048 [2024-04-15 22:58:08.796904] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.796923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:24.048 [2024-04-15 22:58:08.807040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0ff8 00:31:24.048 [2024-04-15 22:58:08.808332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.048 [2024-04-15 22:58:08.808352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:24.049 [2024-04-15 22:58:08.818442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f46d0 00:31:24.049 [2024-04-15 22:58:08.819777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.049 [2024-04-15 22:58:08.819797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:24.049 [2024-04-15 22:58:08.829834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f96f8 00:31:24.049 [2024-04-15 22:58:08.831027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.049 [2024-04-15 22:58:08.831047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.049 [2024-04-15 22:58:08.841181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5ec8 00:31:24.049 [2024-04-15 22:58:08.842584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.049 [2024-04-15 22:58:08.842604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.049 [2024-04-15 22:58:08.852529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f8e88 00:31:24.049 [2024-04-15 22:58:08.853951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.049 [2024-04-15 22:58:08.853971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.863936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f3e60 00:31:24.311 [2024-04-15 22:58:08.865124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.865144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.875386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f4f40 00:31:24.311 [2024-04-15 22:58:08.876967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.876987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.886512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eee38 00:31:24.311 [2024-04-15 22:58:08.886941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.886961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.899594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f92c0 00:31:24.311 [2024-04-15 22:58:08.901285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.901305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.910962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e4140 00:31:24.311 [2024-04-15 22:58:08.912045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.912065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.921589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5220 00:31:24.311 [2024-04-15 22:58:08.923590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.923611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.933458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ea248 00:31:24.311 [2024-04-15 22:58:08.934958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.934977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.944899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190edd58 00:31:24.311 [2024-04-15 22:58:08.946430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.946450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.956337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e6300 00:31:24.311 [2024-04-15 
22:58:08.957847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.957867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.967731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5ec8 00:31:24.311 [2024-04-15 22:58:08.969307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.969330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.979427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f1868 00:31:24.311 [2024-04-15 22:58:08.980908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.980928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:08.990864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f1868 00:31:24.311 [2024-04-15 22:58:08.992365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:08.992385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.002284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e8d30 00:31:24.311 [2024-04-15 22:58:09.003781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.003801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.013991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f2d80 00:31:24.311 [2024-04-15 22:58:09.014894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.014915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.024209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f9f68 00:31:24.311 [2024-04-15 22:58:09.025211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.025231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.035459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e4140 
00:31:24.311 [2024-04-15 22:58:09.036471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.036491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.046839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f8e88 00:31:24.311 [2024-04-15 22:58:09.047948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.047968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.058211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e8088 00:31:24.311 [2024-04-15 22:58:09.059576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.059597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.069578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ea680 00:31:24.311 [2024-04-15 22:58:09.070671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.070694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.080988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5a90 00:31:24.311 [2024-04-15 22:58:09.082232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.082253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.093364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e27f0 00:31:24.311 [2024-04-15 22:58:09.094405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.094425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.104771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e0630 00:31:24.311 [2024-04-15 22:58:09.105912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.105933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.311 [2024-04-15 22:58:09.116103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) 
with pdu=0x2000190f8618 00:31:24.311 [2024-04-15 22:58:09.117640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.311 [2024-04-15 22:58:09.117661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.574 [2024-04-15 22:58:09.126235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f8618 00:31:24.574 [2024-04-15 22:58:09.127121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.574 [2024-04-15 22:58:09.127142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:24.574 [2024-04-15 22:58:09.137694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e6300 00:31:24.574 [2024-04-15 22:58:09.138626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.574 [2024-04-15 22:58:09.138646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:24.574 [2024-04-15 22:58:09.149125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ed4e8 00:31:24.574 [2024-04-15 22:58:09.150086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.574 [2024-04-15 22:58:09.150106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:24.574 [2024-04-15 22:58:09.160521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eb760 00:31:24.574 [2024-04-15 22:58:09.161493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.574 [2024-04-15 22:58:09.161513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:24.574 [2024-04-15 22:58:09.171968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e23b8 00:31:24.574 [2024-04-15 22:58:09.173018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.173038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.183408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ea248 00:31:24.575 [2024-04-15 22:58:09.184464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.184485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.194853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1eebe30) with pdu=0x2000190f4f40 00:31:24.575 [2024-04-15 22:58:09.195971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.195991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.206094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0ff8 00:31:24.575 [2024-04-15 22:58:09.206973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.206993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.217508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0788 00:31:24.575 [2024-04-15 22:58:09.218435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.218456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.228980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e4578 00:31:24.575 [2024-04-15 22:58:09.229891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.229911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.240451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f6cc8 00:31:24.575 [2024-04-15 22:58:09.241417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.241438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.251873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e8d30 00:31:24.575 [2024-04-15 22:58:09.252882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.252903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.265240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ed4e8 00:31:24.575 [2024-04-15 22:58:09.266886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.266906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.276563] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f2510 00:31:24.575 [2024-04-15 22:58:09.278187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.278209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.287864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f7da8 00:31:24.575 [2024-04-15 22:58:09.289440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.289461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.299240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e0a68 00:31:24.575 [2024-04-15 22:58:09.300860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.300881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.310838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ed0b0 00:31:24.575 [2024-04-15 22:58:09.312400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.312420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.322282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e3498 00:31:24.575 [2024-04-15 22:58:09.323865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.323886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.333697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ee5c8 00:31:24.575 [2024-04-15 22:58:09.335283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.335303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.345401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ea680 00:31:24.575 [2024-04-15 22:58:09.346911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.346931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.356863] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eb328 00:31:24.575 [2024-04-15 22:58:09.358412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.358432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.368303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eaef0 00:31:24.575 [2024-04-15 22:58:09.369898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.369921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:24.575 [2024-04-15 22:58:09.379707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f46d0 00:31:24.575 [2024-04-15 22:58:09.381312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.575 [2024-04-15 22:58:09.381332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.391384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e1710 00:31:24.837 [2024-04-15 22:58:09.392282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.392302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.401701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190edd58 00:31:24.837 [2024-04-15 22:58:09.402878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.402899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.412974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f8a50 00:31:24.837 [2024-04-15 22:58:09.414077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.414097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.424396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eb760 00:31:24.837 [2024-04-15 22:58:09.425498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.425518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:24.837 
[2024-04-15 22:58:09.435782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ea248 00:31:24.837 [2024-04-15 22:58:09.436945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.436964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.447147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ef270 00:31:24.837 [2024-04-15 22:58:09.448225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.448245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.458525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ef270 00:31:24.837 [2024-04-15 22:58:09.460311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.460331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.469385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e9168 00:31:24.837 [2024-04-15 22:58:09.470331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.470351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.480841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ee190 00:31:24.837 [2024-04-15 22:58:09.481859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.481879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.492270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e84c0 00:31:24.837 [2024-04-15 22:58:09.493329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.493348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.503683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190edd58 00:31:24.837 [2024-04-15 22:58:09.504754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.504774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 
dnr:0 00:31:24.837 [2024-04-15 22:58:09.515390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f6458 00:31:24.837 [2024-04-15 22:58:09.515744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.515764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.528907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f46d0 00:31:24.837 [2024-04-15 22:58:09.530726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.530746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.539036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e1710 00:31:24.837 [2024-04-15 22:58:09.540720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.540741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.550258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f5378 00:31:24.837 [2024-04-15 22:58:09.551992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.552012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.561376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190eee38 00:31:24.837 [2024-04-15 22:58:09.561851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.837 [2024-04-15 22:58:09.561871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:24.837 [2024-04-15 22:58:09.574142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f8e88 00:31:24.837 [2024-04-15 22:58:09.575768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.838 [2024-04-15 22:58:09.575789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:24.838 [2024-04-15 22:58:09.585400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e99d8 00:31:24.838 [2024-04-15 22:58:09.586860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.838 [2024-04-15 22:58:09.586882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:24.838 [2024-04-15 22:58:09.597314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e88f8 00:31:24.838 [2024-04-15 22:58:09.598451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.838 [2024-04-15 22:58:09.598472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.838 [2024-04-15 22:58:09.607137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e1710 00:31:24.838 [2024-04-15 22:58:09.608054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.838 [2024-04-15 22:58:09.608075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:24.838 [2024-04-15 22:58:09.618589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f1ca0 00:31:24.838 [2024-04-15 22:58:09.619531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.838 [2024-04-15 22:58:09.619558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:24.838 [2024-04-15 22:58:09.630010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f8e88 00:31:24.838 [2024-04-15 22:58:09.631035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.838 [2024-04-15 22:58:09.631055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:24.838 [2024-04-15 22:58:09.641411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ef6a8 00:31:24.838 [2024-04-15 22:58:09.642340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.838 [2024-04-15 22:58:09.642360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:25.099 [2024-04-15 22:58:09.652794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f2948 00:31:25.099 [2024-04-15 22:58:09.653925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.099 [2024-04-15 22:58:09.653946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:25.099 [2024-04-15 22:58:09.664310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ef6a8 00:31:25.099 [2024-04-15 22:58:09.665389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.099 [2024-04-15 22:58:09.665413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:25.099 [2024-04-15 22:58:09.676793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f3a28 00:31:25.099 [2024-04-15 22:58:09.677715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.099 [2024-04-15 22:58:09.677736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:25.099 [2024-04-15 22:58:09.688221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5a90 00:31:25.099 [2024-04-15 22:58:09.689243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.099 [2024-04-15 22:58:09.689263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.099 [2024-04-15 22:58:09.699679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e3060 00:31:25.099 [2024-04-15 22:58:09.700443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.099 [2024-04-15 22:58:09.700464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:25.099 [2024-04-15 22:58:09.711439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f2510 00:31:25.099 [2024-04-15 22:58:09.712407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.712428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.722790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e6b70 00:31:25.100 [2024-04-15 22:58:09.723783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.723804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.734174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e12d8 00:31:25.100 [2024-04-15 22:58:09.735093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.735114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.745530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0350 00:31:25.100 [2024-04-15 22:58:09.746321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.746341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.756178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e7c50 00:31:25.100 [2024-04-15 22:58:09.757651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.757671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.767489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190ed920 00:31:25.100 [2024-04-15 22:58:09.768603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.768624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.778904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f5be8 00:31:25.100 [2024-04-15 22:58:09.780296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.780317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.790275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f35f0 00:31:25.100 [2024-04-15 22:58:09.792122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.792142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.801708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5658 00:31:25.100 [2024-04-15 22:58:09.803253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.803273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.812242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e9168 00:31:25.100 [2024-04-15 22:58:09.812475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.812496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.824042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190efae0 00:31:25.100 [2024-04-15 22:58:09.824890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.824911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.835493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0350 00:31:25.100 [2024-04-15 22:58:09.836374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.836394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.846916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f3a28 00:31:25.100 [2024-04-15 22:58:09.847802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.847824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.858350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5220 00:31:25.100 [2024-04-15 22:58:09.859246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.859267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.869770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e5220 00:31:25.100 [2024-04-15 22:58:09.870683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.870703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.881175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f3a28 00:31:25.100 [2024-04-15 22:58:09.882100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.882120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.892602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190f0350 00:31:25.100 [2024-04-15 22:58:09.893523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.893547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.100 [2024-04-15 22:58:09.904014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190efae0 00:31:25.100 [2024-04-15 22:58:09.904929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.100 [2024-04-15 22:58:09.904949] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:31:25.361 [2024-04-15 22:58:09.915401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eebe30) with pdu=0x2000190e3060
00:31:25.361 [2024-04-15 22:58:09.915995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:25.361 [2024-04-15 22:58:09.916014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:31:25.361
00:31:25.361 Latency(us)
00:31:25.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.361 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:25.361 nvme0n1 : 2.00 22274.01 87.01 0.00 0.00 5739.22 2880.85 15073.28
00:31:25.361 ===================================================================================================================
00:31:25.361 Total : 22274.01 87.01 0.00 0.00 5739.22 2880.85 15073.28
00:31:25.361 0
00:31:25.361 22:58:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:25.361 22:58:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:25.361 22:58:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:25.361 22:58:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:25.361 | .driver_specific
00:31:25.361 | .nvme_error
00:31:25.361 | .status_code
00:31:25.361 | .command_transient_transport_error'
00:31:25.361 22:58:10 -- host/digest.sh@71 -- # (( 175 > 0 ))
00:31:25.361 22:58:10 -- host/digest.sh@73 -- # killprocess 1323351
00:31:25.361 22:58:10 -- common/autotest_common.sh@926 -- # '[' -z 1323351 ']'
00:31:25.361 22:58:10 -- common/autotest_common.sh@930 -- # kill -0 1323351
00:31:25.361 22:58:10 -- common/autotest_common.sh@931 -- # uname
00:31:25.361 22:58:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:25.361 22:58:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1323351
00:31:25.361 22:58:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:25.361 22:58:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:25.361 22:58:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1323351'
00:31:25.361 killing process with pid 1323351
00:31:25.361 22:58:10 -- common/autotest_common.sh@945 -- # kill 1323351
00:31:25.361 Received shutdown signal, test time was about 2.000000 seconds
00:31:25.361
00:31:25.361 Latency(us)
00:31:25.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.361 ===================================================================================================================
00:31:25.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:25.361 22:58:10 -- common/autotest_common.sh@950 -- # wait 1323351
00:31:25.622 22:58:10 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:31:25.622 22:58:10 -- host/digest.sh@54 -- # local rw bs qd
00:31:25.622 22:58:10 -- host/digest.sh@56 -- # rw=randwrite
00:31:25.622 22:58:10 -- host/digest.sh@56 -- # bs=131072
00:31:25.622 22:58:10 -- host/digest.sh@56 -- # qd=16
00:31:25.622 22:58:10 -- host/digest.sh@58 -- # bperfpid=1324041
00:31:25.622 22:58:10 -- host/digest.sh@60 -- # waitforlisten 1324041 /var/tmp/bperf.sock
00:31:25.622 22:58:10 -- common/autotest_common.sh@819 -- # '[' -z 1324041 ']'
00:31:25.622 22:58:10 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:31:25.622 22:58:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:25.622 22:58:10 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:25.622 22:58:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:25.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:25.622 22:58:10 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:25.622 22:58:10 -- common/autotest_common.sh@10 -- # set +x
00:31:25.622 [2024-04-15 22:58:10.330149] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:31:25.622 [2024-04-15 22:58:10.330203] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324041 ]
00:31:25.622 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:25.622 Zero copy mechanism will not be used.
00:31:25.622 EAL: No free 2048 kB hugepages reported on node 1
00:31:25.622 [2024-04-15 22:58:10.395857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:25.882 [2024-04-15 22:58:10.457151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:26.453 22:58:11 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:26.453 22:58:11 -- common/autotest_common.sh@852 -- # return 0
00:31:26.453 22:58:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:26.453 22:58:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:26.453 22:58:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:26.453 22:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:26.453 22:58:11 -- common/autotest_common.sh@10 -- # set +x
00:31:26.453 22:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:26.453 22:58:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:26.453 22:58:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:27.025 nvme0n1
00:31:27.025 22:58:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:27.025 22:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:27.025 22:58:11 -- common/autotest_common.sh@10 -- # set +x
00:31:27.025 22:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:27.025 22:58:11 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:27.025 22:58:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:27.025 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:27.025 Zero copy mechanism will not be used.
00:31:27.025 Running I/O for 2 seconds...
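The trace above is the setup for the second error-injection pass: host/digest.sh starts a fresh bdevperf job (randwrite, 128 KiB I/O, queue depth 16, 2 seconds), turns on NVMe error statistics with unlimited bdev retries, attaches the TCP controller with data digest enabled (--ddgst), arms the accel crc32c error injector in corrupt mode, runs the workload, and later reads the command_transient_transport_error counter out of bdev_get_iostat, exactly as the earlier (( 175 > 0 )) check did for the 4 KiB run. A minimal stand-alone sketch of that sequence, assuming the SPDK tree as the working directory, a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and a bdevperf instance listening on /var/tmp/bperf.sock (the threshold check at the end is illustrative, not the harness's exact assertion):

    RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # collect per-status error counts, retry failed I/O indefinitely
    $RPC accel_error_inject_error -o crc32c -t disable                   # clear any injection left over from a previous pass
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest on, so corrupted CRCs become digest errors
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results (-i 32 appears to be the injection interval)
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    errs=$($RPC bdev_get_iostat -b nvme0n1 \
           | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 )) && echo "digest errors surfaced as $errs transient transport errors"

With the crc32c results corrupted, the data digest checks fail and each affected WRITE completes with TRANSIENT TRANSPORT ERROR (00/22); because --bdev-retry-count is -1 the bdev layer keeps retrying, so the run still finishes (about 22k IOPS in the 4 KiB pass above) while the counter the harness reads back records every rejection.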
00:31:27.025 [2024-04-15 22:58:11.746598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.025 [2024-04-15 22:58:11.746842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.025 [2024-04-15 22:58:11.746876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.025 [2024-04-15 22:58:11.757832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.025 [2024-04-15 22:58:11.758078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.025 [2024-04-15 22:58:11.758111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.025 [2024-04-15 22:58:11.768973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.025 [2024-04-15 22:58:11.769199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.025 [2024-04-15 22:58:11.769219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.025 [2024-04-15 22:58:11.780388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.025 [2024-04-15 22:58:11.780748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.025 [2024-04-15 22:58:11.780769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.025 [2024-04-15 22:58:11.790933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.025 [2024-04-15 22:58:11.791153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.025 [2024-04-15 22:58:11.791173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.025 [2024-04-15 22:58:11.801670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.025 [2024-04-15 22:58:11.801804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.025 [2024-04-15 22:58:11.801824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.025 [2024-04-15 22:58:11.812679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.025 [2024-04-15 22:58:11.813083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.025 [2024-04-15 22:58:11.813103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.025 [2024-04-15 22:58:11.823688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.025 [2024-04-15 22:58:11.824035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.025 [2024-04-15 22:58:11.824054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.834974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.835263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.835284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.845670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.846018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.846039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.855648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.856025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.856045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.866463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.866815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.866836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.876939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.877071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.877090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.886900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.887053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.887072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.895946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.896142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.896161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.903204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.903525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.903551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.911106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.911356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.911377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.919819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.920011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.920030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.927668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.927922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.927942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.937592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.937759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.937778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.946506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.946847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.946868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.956673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.956801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.956820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.965371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.965603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.965622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.970480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.970661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.970680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.976633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.976829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.976848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.983364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.983658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.983682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.990446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.990576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:11.990595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:11.997506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:11.997600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 
22:58:11.997619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:12.001373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:12.001468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:12.001487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:12.007965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:12.008035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:12.008054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.288 [2024-04-15 22:58:12.012072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.288 [2024-04-15 22:58:12.012232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.288 [2024-04-15 22:58:12.012250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.016317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.016498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.016517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.020610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.020777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.020796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.025121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.025349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.025369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.030147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.030242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.289 [2024-04-15 22:58:12.030261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.034592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.034718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.034737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.040971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.041159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.041178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.046903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.047023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.047042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.052358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.052466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.052485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.056185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.056324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.056343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.061894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.062117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.062136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.071118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.071443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.071464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.081625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.081882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.081902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.289 [2024-04-15 22:58:12.090795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.289 [2024-04-15 22:58:12.090895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.289 [2024-04-15 22:58:12.090914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.550 [2024-04-15 22:58:12.096727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.550 [2024-04-15 22:58:12.096808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.550 [2024-04-15 22:58:12.096826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.550 [2024-04-15 22:58:12.100387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.550 [2024-04-15 22:58:12.100488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.550 [2024-04-15 22:58:12.100507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.550 [2024-04-15 22:58:12.104482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.550 [2024-04-15 22:58:12.104655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.550 [2024-04-15 22:58:12.104674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.550 [2024-04-15 22:58:12.108485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.550 [2024-04-15 22:58:12.108812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.550 [2024-04-15 22:58:12.108832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.550 [2024-04-15 22:58:12.113164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.550 [2024-04-15 22:58:12.113362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.550 [2024-04-15 22:58:12.113381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.550 [2024-04-15 22:58:12.117632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.550 [2024-04-15 22:58:12.117783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.550 [2024-04-15 22:58:12.117802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.550 [2024-04-15 22:58:12.122880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.550 [2024-04-15 22:58:12.123030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.550 [2024-04-15 22:58:12.123049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.129723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.129817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.129841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.135939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.136281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.136301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.143935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.144002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.144020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.151887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.152072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.152091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.158383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.158690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.158710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.165072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.165213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.165232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.172632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.172785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.172803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.177998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.178111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.178129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.182382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.182513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.182532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.189384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.189482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.189501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.199197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.199320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.199339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.208783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.209026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.209047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.218502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.218644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.218663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.227309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.227619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.227639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.236381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.236461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.236480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.245177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.245317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.245335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.254620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.254746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.254765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.263847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.263989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.264008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.273079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 
22:58:12.273193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.273212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.281269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.281402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.281422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.289806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.290064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.290085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.299522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.299881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.299901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.309847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.309958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.309977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.318049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.318130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.318148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.327172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.327420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.327441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.336340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 
00:31:27.551 [2024-04-15 22:58:12.336468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.336488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.345443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.551 [2024-04-15 22:58:12.345562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.551 [2024-04-15 22:58:12.345584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.551 [2024-04-15 22:58:12.355025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.552 [2024-04-15 22:58:12.355295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.552 [2024-04-15 22:58:12.355316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.365205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.365359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.365378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.373942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.374186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.374206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.383222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.383512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.383532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.391629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.391902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.391921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.400954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) 
with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.401230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.401250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.409411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.409521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.409540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.417856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.417970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.417989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.425009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.425136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.425155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.434415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.434621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.434640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.442974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.443104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.443122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.448465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.448608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.448627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.456102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.813 [2024-04-15 22:58:12.456356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.813 [2024-04-15 22:58:12.456376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.813 [2024-04-15 22:58:12.463191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.463299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.463318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.471250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.471396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.471415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.477430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.477551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.477570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.481651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.481815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.481834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.485977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.486130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.486149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.489485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.489636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.489655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.493548] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.493705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.493723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.497987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.498149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.498167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.504967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.505145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.505164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.513579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.513853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.513873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.524059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.524132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.524150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.532464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.532626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.532645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.540130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.540513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.540537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.814 
[2024-04-15 22:58:12.548615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.548721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.548740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.558259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.558351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.558369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.565952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.566060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.566080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.573632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.573879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.573900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.582049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.582138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.582157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.588701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.588812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.588831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.594740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.594860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.594879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.598748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.598896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.598915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.602665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.602817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.602836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.606333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.606439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.606458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.609792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.609898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.609918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.613679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.613794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.613814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.617086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.617191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.617210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.814 [2024-04-15 22:58:12.621055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:27.814 [2024-04-15 22:58:12.621208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.814 [2024-04-15 22:58:12.621228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.076 [2024-04-15 22:58:12.626502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.076 [2024-04-15 22:58:12.626679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.076 [2024-04-15 22:58:12.626698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.076 [2024-04-15 22:58:12.631313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.076 [2024-04-15 22:58:12.631489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.076 [2024-04-15 22:58:12.631507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.076 [2024-04-15 22:58:12.636524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.076 [2024-04-15 22:58:12.636714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.076 [2024-04-15 22:58:12.636734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.076 [2024-04-15 22:58:12.641304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.076 [2024-04-15 22:58:12.641467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.076 [2024-04-15 22:58:12.641487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.076 [2024-04-15 22:58:12.649368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.076 [2024-04-15 22:58:12.649659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.076 [2024-04-15 22:58:12.649678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.076 [2024-04-15 22:58:12.655056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.076 [2024-04-15 22:58:12.655161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.076 [2024-04-15 22:58:12.655180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.661326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.661667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.661688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.668298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.668374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.668393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.674226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.674348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.674366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.677754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.677906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.677925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.681548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.681681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.681699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.685171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.685276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.685298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.688979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.689175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.689194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.693172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.693294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.693312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.699231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.699352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.699371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.707107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.707418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.707439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.715127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.715254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.715272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.721384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.721482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.721501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.726008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.726220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.726238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.729905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.730178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.730197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.737221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.737513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 
22:58:12.737533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.744847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.744918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.744936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.752018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.752145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.752163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.757478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.757558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.757577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.764017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.764204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.764223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.768643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.768774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.768792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.773111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.773259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.773278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.776937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.777029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:28.077 [2024-04-15 22:58:12.777047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.781565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.781665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.781684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.789955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.790072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.790090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.794952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.795098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.795116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.799595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.799693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.799711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.803419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.077 [2024-04-15 22:58:12.803605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.077 [2024-04-15 22:58:12.803624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.077 [2024-04-15 22:58:12.807701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.807829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.807848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.813682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.813803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.813822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.818568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.818710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.818728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.822413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.822526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.822550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.826328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.826394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.826416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.829756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.829877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.829896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.833879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.834038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.834057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.837819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.837963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.837982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.841224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.841375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.841395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.845485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.845675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.845694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.850663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.850800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.850819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.855319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.855437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.855456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.860507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.860573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.860591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.869351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.869511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.869529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.877633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.877700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.877718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.078 [2024-04-15 22:58:12.882940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.078 [2024-04-15 22:58:12.883047] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.078 [2024-04-15 22:58:12.883065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.888446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.888511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.888530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.893998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.894112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.894131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.900964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.901039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.901057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.909940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.910039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.910057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.918174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.918442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.918462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.925973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.926255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.926276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.933188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.933298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.933316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.938751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.938887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.938906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.945271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.945370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.945389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.953616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.953807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.953825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.962632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.962703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.962721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.972111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.972326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.972344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.979276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.979358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.979376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.987032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 
22:58:12.987110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.987129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:12.996935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:12.997116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:12.997138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:13.003905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:13.004044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:13.004063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:13.008870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:13.009039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:13.009058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.340 [2024-04-15 22:58:13.012701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.340 [2024-04-15 22:58:13.012885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.340 [2024-04-15 22:58:13.012904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.017470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.017558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.017576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.022614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.022901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.022921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.030003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with 
pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.030125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.030144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.033951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.034044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.034062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.038357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.038459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.038478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.042516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.042659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.042678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.046190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.046365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.046384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.049763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.049915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.049934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.053953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.054062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.054081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.060157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.060254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.060273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.066072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.066204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.066223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.071846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.072035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.072054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.080117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.080467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.080487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.088712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.088964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.088984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.099357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.099732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.099752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.108066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.108320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.108340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.118671] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.118750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.118769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.129414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.129805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.129826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.341 [2024-04-15 22:58:13.140287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.341 [2024-04-15 22:58:13.140422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.341 [2024-04-15 22:58:13.140441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.150664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.150866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.150886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.161704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.162064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.162083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.172632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.172768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.172788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.183675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.183863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.183885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.604 
[2024-04-15 22:58:13.193966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.194162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.194181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.202780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.203023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.203042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.213035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.213252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.213271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.219646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.219731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.219750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.227295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.227463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.227482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.235103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.235230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.235249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.241226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.241366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.241384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.244749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.244891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.244910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.248541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.248866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.248886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.254234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.254334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.254353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.258600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.258735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.258755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.262506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.262794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.262813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.266675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.266778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.266797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.270503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.270660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.270680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.277798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.278103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.278124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.285134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.285420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.604 [2024-04-15 22:58:13.285440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.604 [2024-04-15 22:58:13.291448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.604 [2024-04-15 22:58:13.291550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.291569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.295071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.295192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.295211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.298529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.298663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.298682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.302305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.302434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.302453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.306869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.307051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.307070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.312079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.312219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.312237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.317359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.317534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.317560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.322326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.322511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.322530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.327345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.327515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.327534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.331719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.331855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.331878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.336333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.336480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.336499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.340719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.340999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.341019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.347510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.347609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.347628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.351265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.351369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.351388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.354867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.354969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.354988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.360436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.360668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.360687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.364197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.364317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.364336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.368291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.368445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.368464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.371940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.372078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 
22:58:13.372097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.375299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.375422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.375441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.379707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.379796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.379814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.386687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.386778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.386797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.390876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.390970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.390988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.394458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.394596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.394614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.399676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.399811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.399830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.403977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.404095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:28.605 [2024-04-15 22:58:13.404113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.605 [2024-04-15 22:58:13.408590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.605 [2024-04-15 22:58:13.408761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.605 [2024-04-15 22:58:13.408780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.412639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.412755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.412775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.420747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.420968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.420987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.427100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.427308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.427327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.435596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.435712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.435731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.439691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.439825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.439843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.443195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.443339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.443358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.446937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.447061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.447080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.450536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.450674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.450692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.454992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.455107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.455129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.463154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.463389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.463408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.470922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.471124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.471143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.477106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.477290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.477309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.484610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.484992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.485011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.492266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.492401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.492420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.496456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.496580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.496600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.500036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.500129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.500147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.503800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.503895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.503913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.507253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.507375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.507394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.511001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.511201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.511220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.515892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.516097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.516116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.520660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.520850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.520869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.525627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.525771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.525789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.530673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.530845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.530864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.535034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.535118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.535137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.868 [2024-04-15 22:58:13.542097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.868 [2024-04-15 22:58:13.542265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.868 [2024-04-15 22:58:13.542283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.546458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.546595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.546614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.549915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.550051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.550069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.553748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.553901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.553920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.557563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.557692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.557710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.561778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.561908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.561927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.566246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.566380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.566399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.570832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.570902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.570921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.575197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.575320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.575338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.579396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 
22:58:13.579515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.579533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.586706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.586891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.586912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.591635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.591829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.591847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.595959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.596060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.596078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.599737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.599865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.599883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.603420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.603517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.603535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.607413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.607566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.607585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.611848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with 
pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.611973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.611991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.619990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.620384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.620404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.625835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.626012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.626030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.631365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.631600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.631619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.635740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.635838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.635857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.639155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.639276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.639296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.643044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.643169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.643189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.647642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.647835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.647853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.651867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.651954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.651973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.656108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.656341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.656359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.660746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.661015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.661036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.668419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.869 [2024-04-15 22:58:13.668589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.869 [2024-04-15 22:58:13.668608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.869 [2024-04-15 22:58:13.672734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:28.870 [2024-04-15 22:58:13.672864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.870 [2024-04-15 22:58:13.672882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.130 [2024-04-15 22:58:13.677194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.130 [2024-04-15 22:58:13.677313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.677331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.685530] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.685795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.685815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.693076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.693256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.693275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.702061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.702170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.702188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.706647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.706764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.706782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.711344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.711461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.711479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.717235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.717358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.717376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.722333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.722485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.722507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.727138] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.727256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.727274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.131 [2024-04-15 22:58:13.732412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eec170) with pdu=0x2000190fef90 00:31:29.131 [2024-04-15 22:58:13.732501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.131 [2024-04-15 22:58:13.732518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.131 00:31:29.131 Latency(us) 00:31:29.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.131 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:29.131 nvme0n1 : 2.00 4872.75 609.09 0.00 0.00 3277.02 1570.13 13489.49 00:31:29.131 =================================================================================================================== 00:31:29.131 Total : 4872.75 609.09 0.00 0.00 3277.02 1570.13 13489.49 00:31:29.131 0 00:31:29.131 22:58:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:29.131 22:58:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:29.131 22:58:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:29.131 | .driver_specific 00:31:29.131 | .nvme_error 00:31:29.131 | .status_code 00:31:29.131 | .command_transient_transport_error' 00:31:29.131 22:58:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:29.131 22:58:13 -- host/digest.sh@71 -- # (( 314 > 0 )) 00:31:29.131 22:58:13 -- host/digest.sh@73 -- # killprocess 1324041 00:31:29.131 22:58:13 -- common/autotest_common.sh@926 -- # '[' -z 1324041 ']' 00:31:29.131 22:58:13 -- common/autotest_common.sh@930 -- # kill -0 1324041 00:31:29.131 22:58:13 -- common/autotest_common.sh@931 -- # uname 00:31:29.131 22:58:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:29.131 22:58:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1324041 00:31:29.391 22:58:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:29.391 22:58:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:29.392 22:58:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1324041' 00:31:29.392 killing process with pid 1324041 00:31:29.392 22:58:13 -- common/autotest_common.sh@945 -- # kill 1324041 00:31:29.392 Received shutdown signal, test time was about 2.000000 seconds 00:31:29.392 00:31:29.392 Latency(us) 00:31:29.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.392 =================================================================================================================== 00:31:29.392 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:29.392 22:58:13 -- common/autotest_common.sh@950 -- # wait 1324041 00:31:29.392 22:58:14 -- host/digest.sh@115 -- # killprocess 1321614 00:31:29.392 22:58:14 -- common/autotest_common.sh@926 -- # '[' -z 1321614 ']' 00:31:29.392 
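The pass/fail check just above comes straight from the controller's I/O statistics: host/digest.sh queries the bdevperf RPC socket with bdev_get_iostat and extracts the transient-transport-error counter from the NVMe error status codes using the jq filter shown in the log. A minimal sketch of that query, assuming it is run from the spdk checkout while bdevperf is still listening on /var/tmp/bperf.sock:

  # count commands that completed with COMMAND TRANSIENT TRANSPORT ERROR on nvme0n1
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error')
  (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"

Each data digest error the TCP transport reports above is returned to the host as COMMAND TRANSIENT TRANSPORT ERROR, so the counter reaches 314 in this run and the (( 314 > 0 )) assertion passes.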
22:58:14 -- common/autotest_common.sh@930 -- # kill -0 1321614 00:31:29.392 22:58:14 -- common/autotest_common.sh@931 -- # uname 00:31:29.392 22:58:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:29.392 22:58:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1321614 00:31:29.392 22:58:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:29.392 22:58:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:29.392 22:58:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1321614' 00:31:29.392 killing process with pid 1321614 00:31:29.392 22:58:14 -- common/autotest_common.sh@945 -- # kill 1321614 00:31:29.392 22:58:14 -- common/autotest_common.sh@950 -- # wait 1321614 00:31:29.655 00:31:29.655 real 0m16.239s 00:31:29.655 user 0m31.575s 00:31:29.655 sys 0m3.393s 00:31:29.655 22:58:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:29.655 22:58:14 -- common/autotest_common.sh@10 -- # set +x 00:31:29.655 ************************************ 00:31:29.655 END TEST nvmf_digest_error 00:31:29.655 ************************************ 00:31:29.655 22:58:14 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:31:29.655 22:58:14 -- host/digest.sh@139 -- # nvmftestfini 00:31:29.655 22:58:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:29.655 22:58:14 -- nvmf/common.sh@116 -- # sync 00:31:29.655 22:58:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:29.655 22:58:14 -- nvmf/common.sh@119 -- # set +e 00:31:29.655 22:58:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:29.655 22:58:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:29.655 rmmod nvme_tcp 00:31:29.655 rmmod nvme_fabrics 00:31:29.655 rmmod nvme_keyring 00:31:29.655 22:58:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:29.655 22:58:14 -- nvmf/common.sh@123 -- # set -e 00:31:29.655 22:58:14 -- nvmf/common.sh@124 -- # return 0 00:31:29.655 22:58:14 -- nvmf/common.sh@477 -- # '[' -n 1321614 ']' 00:31:29.655 22:58:14 -- nvmf/common.sh@478 -- # killprocess 1321614 00:31:29.655 22:58:14 -- common/autotest_common.sh@926 -- # '[' -z 1321614 ']' 00:31:29.655 22:58:14 -- common/autotest_common.sh@930 -- # kill -0 1321614 00:31:29.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1321614) - No such process 00:31:29.655 22:58:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1321614 is not found' 00:31:29.655 Process with pid 1321614 is not found 00:31:29.655 22:58:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:29.655 22:58:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:29.655 22:58:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:29.655 22:58:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:29.655 22:58:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:29.655 22:58:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.655 22:58:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:29.655 22:58:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.200 22:58:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:32.200 00:31:32.200 real 0m42.628s 00:31:32.200 user 1m5.093s 00:31:32.200 sys 0m12.847s 00:31:32.200 22:58:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:32.200 22:58:16 -- common/autotest_common.sh@10 -- # set +x 00:31:32.200 ************************************ 00:31:32.200 END TEST nvmf_digest 00:31:32.200 
************************************ 00:31:32.200 22:58:16 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:31:32.200 22:58:16 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:31:32.200 22:58:16 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:31:32.200 22:58:16 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:32.200 22:58:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:32.200 22:58:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:32.200 22:58:16 -- common/autotest_common.sh@10 -- # set +x 00:31:32.200 ************************************ 00:31:32.200 START TEST nvmf_bdevperf 00:31:32.200 ************************************ 00:31:32.200 22:58:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:32.200 * Looking for test storage... 00:31:32.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:32.200 22:58:16 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.200 22:58:16 -- nvmf/common.sh@7 -- # uname -s 00:31:32.200 22:58:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.200 22:58:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.200 22:58:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.200 22:58:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.200 22:58:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.200 22:58:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.200 22:58:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.200 22:58:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.200 22:58:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.200 22:58:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.200 22:58:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:32.200 22:58:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:32.200 22:58:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.200 22:58:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.200 22:58:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.200 22:58:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.200 22:58:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.200 22:58:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.200 22:58:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.200 22:58:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.200 22:58:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.200 22:58:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.200 22:58:16 -- paths/export.sh@5 -- # export PATH 00:31:32.200 22:58:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.200 22:58:16 -- nvmf/common.sh@46 -- # : 0 00:31:32.200 22:58:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:32.200 22:58:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:32.200 22:58:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:32.200 22:58:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.200 22:58:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.200 22:58:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:32.200 22:58:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:32.200 22:58:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:32.200 22:58:16 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:32.200 22:58:16 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:32.200 22:58:16 -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:32.200 22:58:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:32.200 22:58:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.200 22:58:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:32.200 22:58:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:32.200 22:58:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:32.200 22:58:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.200 22:58:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:32.200 22:58:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.200 22:58:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:32.200 22:58:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:32.200 22:58:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:32.200 22:58:16 -- common/autotest_common.sh@10 -- # set +x 00:31:40.347 22:58:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:31:40.347 22:58:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:40.347 22:58:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:40.347 22:58:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:40.347 22:58:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:40.347 22:58:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:40.347 22:58:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:40.347 22:58:24 -- nvmf/common.sh@294 -- # net_devs=() 00:31:40.347 22:58:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:40.347 22:58:24 -- nvmf/common.sh@295 -- # e810=() 00:31:40.347 22:58:24 -- nvmf/common.sh@295 -- # local -ga e810 00:31:40.347 22:58:24 -- nvmf/common.sh@296 -- # x722=() 00:31:40.347 22:58:24 -- nvmf/common.sh@296 -- # local -ga x722 00:31:40.347 22:58:24 -- nvmf/common.sh@297 -- # mlx=() 00:31:40.347 22:58:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:40.347 22:58:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.347 22:58:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:40.347 22:58:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:40.347 22:58:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:40.347 22:58:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:40.347 22:58:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:40.347 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:40.347 22:58:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:40.347 22:58:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:40.347 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:40.347 22:58:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
00:31:40.347 22:58:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:40.347 22:58:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:40.347 22:58:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.347 22:58:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:40.347 22:58:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.347 22:58:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:40.347 Found net devices under 0000:31:00.0: cvl_0_0 00:31:40.347 22:58:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.347 22:58:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:40.347 22:58:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.347 22:58:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:40.348 22:58:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.348 22:58:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:40.348 Found net devices under 0000:31:00.1: cvl_0_1 00:31:40.348 22:58:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.348 22:58:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:40.348 22:58:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:40.348 22:58:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:40.348 22:58:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:40.348 22:58:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:40.348 22:58:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.348 22:58:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.348 22:58:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.348 22:58:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:40.348 22:58:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.348 22:58:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.348 22:58:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:40.348 22:58:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.348 22:58:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.348 22:58:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:40.348 22:58:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:40.348 22:58:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.348 22:58:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.348 22:58:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.348 22:58:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.348 22:58:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:40.348 22:58:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.348 22:58:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.348 22:58:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.348 22:58:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:40.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:40.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:31:40.348 00:31:40.348 --- 10.0.0.2 ping statistics --- 00:31:40.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.348 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:31:40.348 22:58:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:40.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:31:40.348 00:31:40.348 --- 10.0.0.1 ping statistics --- 00:31:40.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.348 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:31:40.348 22:58:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.348 22:58:24 -- nvmf/common.sh@410 -- # return 0 00:31:40.348 22:58:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:40.348 22:58:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.348 22:58:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:40.348 22:58:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:40.348 22:58:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.348 22:58:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:40.348 22:58:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:40.348 22:58:24 -- host/bdevperf.sh@25 -- # tgt_init 00:31:40.348 22:58:24 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:40.348 22:58:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:40.348 22:58:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:40.348 22:58:24 -- common/autotest_common.sh@10 -- # set +x 00:31:40.348 22:58:24 -- nvmf/common.sh@469 -- # nvmfpid=1329432 00:31:40.348 22:58:24 -- nvmf/common.sh@470 -- # waitforlisten 1329432 00:31:40.348 22:58:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:40.348 22:58:24 -- common/autotest_common.sh@819 -- # '[' -z 1329432 ']' 00:31:40.348 22:58:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.348 22:58:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:40.348 22:58:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.348 22:58:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:40.348 22:58:24 -- common/autotest_common.sh@10 -- # set +x 00:31:40.348 [2024-04-15 22:58:24.708959] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:40.348 [2024-04-15 22:58:24.709043] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.348 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.348 [2024-04-15 22:58:24.792131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.348 [2024-04-15 22:58:24.864621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:40.348 [2024-04-15 22:58:24.864745] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:40.348 [2024-04-15 22:58:24.864753] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.348 [2024-04-15 22:58:24.864760] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.348 [2024-04-15 22:58:24.864872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.348 [2024-04-15 22:58:24.865033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.348 [2024-04-15 22:58:24.865033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.920 22:58:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:40.920 22:58:25 -- common/autotest_common.sh@852 -- # return 0 00:31:40.920 22:58:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:40.920 22:58:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:40.920 22:58:25 -- common/autotest_common.sh@10 -- # set +x 00:31:40.920 22:58:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.920 22:58:25 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.920 22:58:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.920 22:58:25 -- common/autotest_common.sh@10 -- # set +x 00:31:40.920 [2024-04-15 22:58:25.528696] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.920 22:58:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.920 22:58:25 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:40.920 22:58:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.920 22:58:25 -- common/autotest_common.sh@10 -- # set +x 00:31:40.920 Malloc0 00:31:40.920 22:58:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.920 22:58:25 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:40.920 22:58:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.920 22:58:25 -- common/autotest_common.sh@10 -- # set +x 00:31:40.920 22:58:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.920 22:58:25 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:40.920 22:58:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.920 22:58:25 -- common/autotest_common.sh@10 -- # set +x 00:31:40.920 22:58:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.920 22:58:25 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.920 22:58:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.920 22:58:25 -- common/autotest_common.sh@10 -- # set +x 00:31:40.920 [2024-04-15 22:58:25.598022] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.920 22:58:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.920 22:58:25 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:40.920 22:58:25 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:40.920 22:58:25 -- nvmf/common.sh@520 -- # config=() 00:31:40.920 22:58:25 -- nvmf/common.sh@520 -- # local subsystem config 00:31:40.920 22:58:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:40.920 22:58:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:40.920 { 
00:31:40.920 "params": { 00:31:40.920 "name": "Nvme$subsystem", 00:31:40.920 "trtype": "$TEST_TRANSPORT", 00:31:40.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.920 "adrfam": "ipv4", 00:31:40.920 "trsvcid": "$NVMF_PORT", 00:31:40.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.920 "hdgst": ${hdgst:-false}, 00:31:40.920 "ddgst": ${ddgst:-false} 00:31:40.920 }, 00:31:40.920 "method": "bdev_nvme_attach_controller" 00:31:40.920 } 00:31:40.920 EOF 00:31:40.920 )") 00:31:40.920 22:58:25 -- nvmf/common.sh@542 -- # cat 00:31:40.920 22:58:25 -- nvmf/common.sh@544 -- # jq . 00:31:40.920 22:58:25 -- nvmf/common.sh@545 -- # IFS=, 00:31:40.920 22:58:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:40.920 "params": { 00:31:40.920 "name": "Nvme1", 00:31:40.920 "trtype": "tcp", 00:31:40.920 "traddr": "10.0.0.2", 00:31:40.920 "adrfam": "ipv4", 00:31:40.920 "trsvcid": "4420", 00:31:40.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.920 "hdgst": false, 00:31:40.920 "ddgst": false 00:31:40.920 }, 00:31:40.920 "method": "bdev_nvme_attach_controller" 00:31:40.920 }' 00:31:40.920 [2024-04-15 22:58:25.646893] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:40.921 [2024-04-15 22:58:25.646945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329667 ] 00:31:40.921 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.921 [2024-04-15 22:58:25.712079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.181 [2024-04-15 22:58:25.774794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.182 Running I/O for 1 seconds... 
00:31:42.128 00:31:42.128 Latency(us) 00:31:42.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.128 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:42.128 Verification LBA range: start 0x0 length 0x4000 00:31:42.128 Nvme1n1 : 1.01 13935.67 54.44 0.00 0.00 9142.83 1474.56 16820.91 00:31:42.128 =================================================================================================================== 00:31:42.128 Total : 13935.67 54.44 0.00 0.00 9142.83 1474.56 16820.91 00:31:42.390 22:58:27 -- host/bdevperf.sh@30 -- # bdevperfpid=1329850 00:31:42.390 22:58:27 -- host/bdevperf.sh@32 -- # sleep 3 00:31:42.390 22:58:27 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:42.390 22:58:27 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:42.390 22:58:27 -- nvmf/common.sh@520 -- # config=() 00:31:42.390 22:58:27 -- nvmf/common.sh@520 -- # local subsystem config 00:31:42.390 22:58:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:42.390 22:58:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:42.390 { 00:31:42.390 "params": { 00:31:42.390 "name": "Nvme$subsystem", 00:31:42.390 "trtype": "$TEST_TRANSPORT", 00:31:42.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.390 "adrfam": "ipv4", 00:31:42.390 "trsvcid": "$NVMF_PORT", 00:31:42.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.390 "hdgst": ${hdgst:-false}, 00:31:42.390 "ddgst": ${ddgst:-false} 00:31:42.390 }, 00:31:42.390 "method": "bdev_nvme_attach_controller" 00:31:42.390 } 00:31:42.390 EOF 00:31:42.390 )") 00:31:42.390 22:58:27 -- nvmf/common.sh@542 -- # cat 00:31:42.390 22:58:27 -- nvmf/common.sh@544 -- # jq . 00:31:42.390 22:58:27 -- nvmf/common.sh@545 -- # IFS=, 00:31:42.390 22:58:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:42.390 "params": { 00:31:42.390 "name": "Nvme1", 00:31:42.390 "trtype": "tcp", 00:31:42.390 "traddr": "10.0.0.2", 00:31:42.390 "adrfam": "ipv4", 00:31:42.390 "trsvcid": "4420", 00:31:42.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:42.390 "hdgst": false, 00:31:42.390 "ddgst": false 00:31:42.390 }, 00:31:42.390 "method": "bdev_nvme_attach_controller" 00:31:42.390 }' 00:31:42.390 [2024-04-15 22:58:27.110173] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:42.390 [2024-04-15 22:58:27.110274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329850 ] 00:31:42.390 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.390 [2024-04-15 22:58:27.180570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.652 [2024-04-15 22:58:27.242481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.914 Running I/O for 15 seconds... 
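This second bdevperf instance differs from the 1-second baseline above: it runs for 15 seconds and carries an extra -f flag the baseline did not use, because the next step in host/bdevperf.sh kills the nvmf target out from under it; every command still queued on the connection then completes as ABORTED - SQ DELETION (00/08), which is what the burst of completions below records. A minimal sketch of the sequence being driven here (pids, fd numbers and flags are the ones visible in this log; paths are shortened relative to the spdk checkout, and /dev/fd/63 carries the gen_nvmf_target_json output in the actual script):

  # long verify run that must ride out the target going away
  ./build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!        # 1329850 in this run

  sleep 3               # let I/O ramp up
  kill -9 "$nvmfpid"    # 1329432: the nvmf_tgt started earlier
  sleep 3               # bdevperf keeps running; in-flight commands complete as ABORTED - SQ DELETION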
00:31:45.492 22:58:30 -- host/bdevperf.sh@33 -- # kill -9 1329432 00:31:45.492 22:58:30 -- host/bdevperf.sh@35 -- # sleep 3 00:31:45.492 [2024-04-15 22:58:30.074639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074887] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.074982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.074995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.075007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.075017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.075030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.075039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.492 [2024-04-15 22:58:30.075052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.492 [2024-04-15 22:58:30.075061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.493 [2024-04-15 22:58:30.075282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:45.493 [2024-04-15 22:58:30.075291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.493 [2024-04-15 22:58:30.075450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075460] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.493 [2024-04-15 22:58:30.075483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.493 [2024-04-15 22:58:30.075515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.493 [2024-04-15 22:58:30.075532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.493 [2024-04-15 22:58:30.075604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.493 [2024-04-15 22:58:30.075623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075632] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.493 [2024-04-15 22:58:30.075639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.493 [2024-04-15 22:58:30.075746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.493 [2024-04-15 22:58:30.075753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.075770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.075786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.075802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.075818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.075837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.075854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.075870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.075887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.075903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.075920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.075936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.075952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:45.494 [2024-04-15 22:58:30.075969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.075985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.075994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076134] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076298] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.494 [2024-04-15 22:58:30.076399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.494 [2024-04-15 22:58:30.076417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.494 [2024-04-15 22:58:30.076426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.495 [2024-04-15 22:58:30.076638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:45.495 [2024-04-15 22:58:30.076647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.495 [2024-04-15 22:58:30.076672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.495 [2024-04-15 22:58:30.076689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.495 [2024-04-15 22:58:30.076705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.495 [2024-04-15 22:58:30.076722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.495 [2024-04-15 22:58:30.076754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.495 [2024-04-15 22:58:30.076906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd0810 is same with the state(5) to be set 00:31:45.495 [2024-04-15 22:58:30.076925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:45.495 [2024-04-15 22:58:30.076931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:45.495 [2024-04-15 22:58:30.076938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23128 len:8 PRP1 0x0 PRP2 0x0 00:31:45.495 [2024-04-15 22:58:30.076946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.495 [2024-04-15 22:58:30.076985] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfd0810 was disconnected and freed. reset controller. 
00:31:45.495 [2024-04-15 22:58:30.079398] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.495 [2024-04-15 22:58:30.079444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.495 [2024-04-15 22:58:30.080132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.495 [2024-04-15 22:58:30.080351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.495 [2024-04-15 22:58:30.080365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.495 [2024-04-15 22:58:30.080374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.495 [2024-04-15 22:58:30.080503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.495 [2024-04-15 22:58:30.080717] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.495 [2024-04-15 22:58:30.080727] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.495 [2024-04-15 22:58:30.080736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.495 [2024-04-15 22:58:30.083042] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.495 [2024-04-15 22:58:30.092316] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.495 [2024-04-15 22:58:30.092900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.495 [2024-04-15 22:58:30.093272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.495 [2024-04-15 22:58:30.093285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.495 [2024-04-15 22:58:30.093294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.495 [2024-04-15 22:58:30.093403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.495 [2024-04-15 22:58:30.093578] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.495 [2024-04-15 22:58:30.093587] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.495 [2024-04-15 22:58:30.093594] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.495 [2024-04-15 22:58:30.095816] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.496 [2024-04-15 22:58:30.104751] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.105434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.105871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.105885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.496 [2024-04-15 22:58:30.105895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.496 [2024-04-15 22:58:30.106077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.496 [2024-04-15 22:58:30.106206] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.496 [2024-04-15 22:58:30.106215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.496 [2024-04-15 22:58:30.106223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.496 [2024-04-15 22:58:30.108496] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.496 [2024-04-15 22:58:30.117124] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.117749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.118137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.118150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.496 [2024-04-15 22:58:30.118160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.496 [2024-04-15 22:58:30.118343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.496 [2024-04-15 22:58:30.118472] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.496 [2024-04-15 22:58:30.118480] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.496 [2024-04-15 22:58:30.118488] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.496 [2024-04-15 22:58:30.120739] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.496 [2024-04-15 22:58:30.129360] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.129898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.130127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.130144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.496 [2024-04-15 22:58:30.130153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.496 [2024-04-15 22:58:30.130336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.496 [2024-04-15 22:58:30.130466] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.496 [2024-04-15 22:58:30.130474] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.496 [2024-04-15 22:58:30.130482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.496 [2024-04-15 22:58:30.132746] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.496 [2024-04-15 22:58:30.141959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.142591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.142982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.142994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.496 [2024-04-15 22:58:30.143004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.496 [2024-04-15 22:58:30.143205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.496 [2024-04-15 22:58:30.143334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.496 [2024-04-15 22:58:30.143342] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.496 [2024-04-15 22:58:30.143350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.496 [2024-04-15 22:58:30.145593] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.496 [2024-04-15 22:58:30.154583] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.155211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.155581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.155595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.496 [2024-04-15 22:58:30.155605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.496 [2024-04-15 22:58:30.155750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.496 [2024-04-15 22:58:30.155897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.496 [2024-04-15 22:58:30.155905] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.496 [2024-04-15 22:58:30.155913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.496 [2024-04-15 22:58:30.158058] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.496 [2024-04-15 22:58:30.167176] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.167843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.168221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.168233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.496 [2024-04-15 22:58:30.168243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.496 [2024-04-15 22:58:30.168406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.496 [2024-04-15 22:58:30.168580] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.496 [2024-04-15 22:58:30.168589] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.496 [2024-04-15 22:58:30.168596] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.496 [2024-04-15 22:58:30.170868] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.496 [2024-04-15 22:58:30.179548] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.180151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.180429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.180442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.496 [2024-04-15 22:58:30.180451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.496 [2024-04-15 22:58:30.180603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.496 [2024-04-15 22:58:30.180751] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.496 [2024-04-15 22:58:30.180760] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.496 [2024-04-15 22:58:30.180767] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.496 [2024-04-15 22:58:30.183078] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.496 [2024-04-15 22:58:30.192123] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.192606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.193036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.496 [2024-04-15 22:58:30.193046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.496 [2024-04-15 22:58:30.193054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.496 [2024-04-15 22:58:30.193237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.496 [2024-04-15 22:58:30.193418] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.496 [2024-04-15 22:58:30.193432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.496 [2024-04-15 22:58:30.193440] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.496 [2024-04-15 22:58:30.195580] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.496 [2024-04-15 22:58:30.204879] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.496 [2024-04-15 22:58:30.205371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.205757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.205767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.497 [2024-04-15 22:58:30.205775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.497 [2024-04-15 22:58:30.205993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.497 [2024-04-15 22:58:30.206119] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.497 [2024-04-15 22:58:30.206127] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.497 [2024-04-15 22:58:30.206134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.497 [2024-04-15 22:58:30.208418] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.497 [2024-04-15 22:58:30.217395] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.497 [2024-04-15 22:58:30.217933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.218314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.218328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.497 [2024-04-15 22:58:30.218336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.497 [2024-04-15 22:58:30.218498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.497 [2024-04-15 22:58:30.218609] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.497 [2024-04-15 22:58:30.218617] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.497 [2024-04-15 22:58:30.218624] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.497 [2024-04-15 22:58:30.221102] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.497 [2024-04-15 22:58:30.229981] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.497 [2024-04-15 22:58:30.230424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.230871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.230885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.497 [2024-04-15 22:58:30.230894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.497 [2024-04-15 22:58:30.231077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.497 [2024-04-15 22:58:30.231206] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.497 [2024-04-15 22:58:30.231214] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.497 [2024-04-15 22:58:30.231222] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.497 [2024-04-15 22:58:30.233496] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.497 [2024-04-15 22:58:30.242539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.497 [2024-04-15 22:58:30.243155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.243535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.243554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.497 [2024-04-15 22:58:30.243563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.497 [2024-04-15 22:58:30.243727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.497 [2024-04-15 22:58:30.243874] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.497 [2024-04-15 22:58:30.243882] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.497 [2024-04-15 22:58:30.243890] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.497 [2024-04-15 22:58:30.246164] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.497 [2024-04-15 22:58:30.255139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.497 [2024-04-15 22:58:30.255766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.256148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.256161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.497 [2024-04-15 22:58:30.256174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.497 [2024-04-15 22:58:30.256320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.497 [2024-04-15 22:58:30.256504] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.497 [2024-04-15 22:58:30.256512] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.497 [2024-04-15 22:58:30.256519] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.497 [2024-04-15 22:58:30.258987] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.497 [2024-04-15 22:58:30.267683] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.497 [2024-04-15 22:58:30.268120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.268495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.268508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.497 [2024-04-15 22:58:30.268517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.497 [2024-04-15 22:58:30.268653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.497 [2024-04-15 22:58:30.268783] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.497 [2024-04-15 22:58:30.268791] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.497 [2024-04-15 22:58:30.268799] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.497 [2024-04-15 22:58:30.271145] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.497 [2024-04-15 22:58:30.280279] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.497 [2024-04-15 22:58:30.280833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.281217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.497 [2024-04-15 22:58:30.281230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.497 [2024-04-15 22:58:30.281239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.497 [2024-04-15 22:58:30.281385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.497 [2024-04-15 22:58:30.281495] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.497 [2024-04-15 22:58:30.281503] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.497 [2024-04-15 22:58:30.281510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.764 [2024-04-15 22:58:30.283823] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.764 [2024-04-15 22:58:30.292748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.764 [2024-04-15 22:58:30.293266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.764 [2024-04-15 22:58:30.293764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.764 [2024-04-15 22:58:30.293801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.293812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.294036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.294221] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.294230] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.294238] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.296459] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.765 [2024-04-15 22:58:30.305191] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.305833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.306213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.306226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.306235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.306437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.306574] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.306583] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.306591] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.308849] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.765 [2024-04-15 22:58:30.317777] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.318363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.318633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.318682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.318692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.318856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.319022] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.319031] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.319038] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.321083] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.765 [2024-04-15 22:58:30.330440] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.330985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.331363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.331375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.331385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.331577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.331730] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.331738] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.331745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.334112] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.765 [2024-04-15 22:58:30.342966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.343441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.343776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.343813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.343825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.343992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.344178] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.344186] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.344193] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.346483] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.765 [2024-04-15 22:58:30.355529] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.356011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.356384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.356394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.356402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.356491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.356661] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.356670] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.356676] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.358943] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.765 [2024-04-15 22:58:30.367937] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.368557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.368925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.368937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.368947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.369110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.369238] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.369251] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.369258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.371478] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.765 [2024-04-15 22:58:30.380505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.381136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.381564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.381577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.381587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.381770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.381917] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.381925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.381932] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.384298] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.765 [2024-04-15 22:58:30.393003] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.393434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.393661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.393675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.393684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.393829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.393958] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.393966] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.765 [2024-04-15 22:58:30.393974] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.765 [2024-04-15 22:58:30.396395] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.765 [2024-04-15 22:58:30.405664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.765 [2024-04-15 22:58:30.406226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.406603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.765 [2024-04-15 22:58:30.406616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.765 [2024-04-15 22:58:30.406626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.765 [2024-04-15 22:58:30.406808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.765 [2024-04-15 22:58:30.406956] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.765 [2024-04-15 22:58:30.406964] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.406976] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.409417] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.766 [2024-04-15 22:58:30.418045] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.418588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.418963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.418973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.418981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.419074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.419200] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.419207] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.419214] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.421554] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.766 [2024-04-15 22:58:30.430548] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.431153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.431515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.431527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.431537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.431709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.431876] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.431884] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.431891] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.434220] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.766 [2024-04-15 22:58:30.443102] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.443687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.444060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.444072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.444082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.444282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.444393] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.444401] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.444408] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.446621] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.766 [2024-04-15 22:58:30.455524] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.456152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.456520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.456533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.456551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.456734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.456863] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.456871] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.456878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.459262] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.766 [2024-04-15 22:58:30.468016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.468630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.469004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.469016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.469025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.469189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.469301] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.469309] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.469316] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.471634] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.766 [2024-04-15 22:58:30.480276] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.480783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.481215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.481225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.481233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.481415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.481562] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.481571] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.481578] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.483861] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.766 [2024-04-15 22:58:30.492804] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.493414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.493637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.493650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.493657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.493783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.493928] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.493936] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.493943] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.496305] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.766 [2024-04-15 22:58:30.505279] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.505697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.506043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.506053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.506061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.506224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.506367] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.506377] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.506384] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.508357] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.766 [2024-04-15 22:58:30.517908] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.518349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.518697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.518707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.766 [2024-04-15 22:58:30.518714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.766 [2024-04-15 22:58:30.518839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.766 [2024-04-15 22:58:30.518946] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.766 [2024-04-15 22:58:30.518953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.766 [2024-04-15 22:58:30.518960] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.766 [2024-04-15 22:58:30.521237] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.766 [2024-04-15 22:58:30.530324] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.766 [2024-04-15 22:58:30.530846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.766 [2024-04-15 22:58:30.531214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.767 [2024-04-15 22:58:30.531224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.767 [2024-04-15 22:58:30.531232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.767 [2024-04-15 22:58:30.531413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.767 [2024-04-15 22:58:30.531561] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.767 [2024-04-15 22:58:30.531569] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.767 [2024-04-15 22:58:30.531576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.767 [2024-04-15 22:58:30.533522] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.767 [2024-04-15 22:58:30.542912] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.767 [2024-04-15 22:58:30.543369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.767 [2024-04-15 22:58:30.543793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.767 [2024-04-15 22:58:30.543803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.767 [2024-04-15 22:58:30.543810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.767 [2024-04-15 22:58:30.543973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.767 [2024-04-15 22:58:30.544135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.767 [2024-04-15 22:58:30.544144] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.767 [2024-04-15 22:58:30.544152] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.767 [2024-04-15 22:58:30.546457] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.767 [2024-04-15 22:58:30.555250] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.767 [2024-04-15 22:58:30.555681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.767 [2024-04-15 22:58:30.556024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.767 [2024-04-15 22:58:30.556035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.767 [2024-04-15 22:58:30.556042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.767 [2024-04-15 22:58:30.556187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.767 [2024-04-15 22:58:30.556331] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.767 [2024-04-15 22:58:30.556338] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.767 [2024-04-15 22:58:30.556345] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.767 [2024-04-15 22:58:30.558519] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.767 [2024-04-15 22:58:30.567829] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:45.767 [2024-04-15 22:58:30.568354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.767 [2024-04-15 22:58:30.568731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.767 [2024-04-15 22:58:30.568750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:45.767 [2024-04-15 22:58:30.568760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:45.767 [2024-04-15 22:58:30.568924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:45.767 [2024-04-15 22:58:30.569118] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:45.767 [2024-04-15 22:58:30.569126] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:45.767 [2024-04-15 22:58:30.569134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:45.767 [2024-04-15 22:58:30.571430] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.030 [2024-04-15 22:58:30.580391] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.030 [2024-04-15 22:58:30.580949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.030 [2024-04-15 22:58:30.581260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.030 [2024-04-15 22:58:30.581270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.030 [2024-04-15 22:58:30.581278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.030 [2024-04-15 22:58:30.581424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.030 [2024-04-15 22:58:30.581575] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.030 [2024-04-15 22:58:30.581583] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.030 [2024-04-15 22:58:30.581590] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.030 [2024-04-15 22:58:30.583859] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.030 [2024-04-15 22:58:30.592804] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.030 [2024-04-15 22:58:30.593347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.030 [2024-04-15 22:58:30.593726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.030 [2024-04-15 22:58:30.593739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.030 [2024-04-15 22:58:30.593749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.030 [2024-04-15 22:58:30.593931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.030 [2024-04-15 22:58:30.594060] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.030 [2024-04-15 22:58:30.594069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.030 [2024-04-15 22:58:30.594076] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.030 [2024-04-15 22:58:30.596446] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.030 [2024-04-15 22:58:30.605156] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.030 [2024-04-15 22:58:30.605817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.030 [2024-04-15 22:58:30.606086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.030 [2024-04-15 22:58:30.606102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.030 [2024-04-15 22:58:30.606120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.030 [2024-04-15 22:58:30.606284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.030 [2024-04-15 22:58:30.606451] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.030 [2024-04-15 22:58:30.606460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.030 [2024-04-15 22:58:30.606467] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.030 [2024-04-15 22:58:30.608784] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.030 [2024-04-15 22:58:30.617502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.030 [2024-04-15 22:58:30.618153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.030 [2024-04-15 22:58:30.618529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.030 [2024-04-15 22:58:30.618548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.030 [2024-04-15 22:58:30.618558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.030 [2024-04-15 22:58:30.618759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.030 [2024-04-15 22:58:30.618888] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.618897] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.618905] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.621264] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.031 [2024-04-15 22:58:30.629736] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.630182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.630547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.630558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.630566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.630729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.630872] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.630880] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.630887] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.633063] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.031 [2024-04-15 22:58:30.642223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.642796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.643144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.643154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.643161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.643309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.643435] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.643443] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.643449] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.645742] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.031 [2024-04-15 22:58:30.654864] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.655416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.655765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.655776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.655783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.655983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.656146] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.656154] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.656161] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.658482] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.031 [2024-04-15 22:58:30.667308] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.667677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.668035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.668045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.668052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.668161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.668306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.668313] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.668320] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.670424] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.031 [2024-04-15 22:58:30.679687] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.680159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.680509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.680519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.680526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.680657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.680787] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.680794] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.680801] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.683105] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.031 [2024-04-15 22:58:30.692312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.692908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.693193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.693208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.693217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.693419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.693555] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.693565] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.693572] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.696051] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.031 [2024-04-15 22:58:30.704812] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.705424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.705650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.705665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.705675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.705820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.705949] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.705958] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.705966] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.708561] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.031 [2024-04-15 22:58:30.717341] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.717882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.718230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.718239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.718247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.718336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.718517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.718525] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.718536] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.720854] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.031 [2024-04-15 22:58:30.729947] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.031 [2024-04-15 22:58:30.730476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.730828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.031 [2024-04-15 22:58:30.730839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.031 [2024-04-15 22:58:30.730846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.031 [2024-04-15 22:58:30.730972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.031 [2024-04-15 22:58:30.731079] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.031 [2024-04-15 22:58:30.731087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.031 [2024-04-15 22:58:30.731094] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.031 [2024-04-15 22:58:30.733524] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.032 [2024-04-15 22:58:30.742444] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.032 [2024-04-15 22:58:30.742988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.743251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.743261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.032 [2024-04-15 22:58:30.743269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.032 [2024-04-15 22:58:30.743412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.032 [2024-04-15 22:58:30.743561] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.032 [2024-04-15 22:58:30.743570] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.032 [2024-04-15 22:58:30.743576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.032 [2024-04-15 22:58:30.746045] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.032 [2024-04-15 22:58:30.754890] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.032 [2024-04-15 22:58:30.755384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.755731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.755742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.032 [2024-04-15 22:58:30.755749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.032 [2024-04-15 22:58:30.755874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.032 [2024-04-15 22:58:30.756018] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.032 [2024-04-15 22:58:30.756025] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.032 [2024-04-15 22:58:30.756035] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.032 [2024-04-15 22:58:30.758358] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.032 [2024-04-15 22:58:30.767405] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.032 [2024-04-15 22:58:30.768664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.769016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.769027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.032 [2024-04-15 22:58:30.769036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.032 [2024-04-15 22:58:30.769206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.032 [2024-04-15 22:58:30.769333] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.032 [2024-04-15 22:58:30.769341] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.032 [2024-04-15 22:58:30.769348] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.032 [2024-04-15 22:58:30.771741] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.032 [2024-04-15 22:58:30.779921] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.032 [2024-04-15 22:58:30.780466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.780837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.780847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.032 [2024-04-15 22:58:30.780855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.032 [2024-04-15 22:58:30.780981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.032 [2024-04-15 22:58:30.781106] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.032 [2024-04-15 22:58:30.781114] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.032 [2024-04-15 22:58:30.781121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.032 [2024-04-15 22:58:30.783518] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.032 [2024-04-15 22:58:30.792355] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.032 [2024-04-15 22:58:30.792981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.793350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.793363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.032 [2024-04-15 22:58:30.793373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.032 [2024-04-15 22:58:30.793563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.032 [2024-04-15 22:58:30.793712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.032 [2024-04-15 22:58:30.793720] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.032 [2024-04-15 22:58:30.793728] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.032 [2024-04-15 22:58:30.796021] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.032 [2024-04-15 22:58:30.804988] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.032 [2024-04-15 22:58:30.805648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.806003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.806016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.032 [2024-04-15 22:58:30.806025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.032 [2024-04-15 22:58:30.806151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.032 [2024-04-15 22:58:30.806355] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.032 [2024-04-15 22:58:30.806363] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.032 [2024-04-15 22:58:30.806371] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.032 [2024-04-15 22:58:30.808483] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.032 [2024-04-15 22:58:30.817539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.032 [2024-04-15 22:58:30.818157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.818527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.818540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.032 [2024-04-15 22:58:30.818557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.032 [2024-04-15 22:58:30.818740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.032 [2024-04-15 22:58:30.818869] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.032 [2024-04-15 22:58:30.818877] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.032 [2024-04-15 22:58:30.818885] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.032 [2024-04-15 22:58:30.821148] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.032 [2024-04-15 22:58:30.829925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.032 [2024-04-15 22:58:30.830483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.830872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.032 [2024-04-15 22:58:30.830886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.032 [2024-04-15 22:58:30.830895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.032 [2024-04-15 22:58:30.831059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.032 [2024-04-15 22:58:30.831169] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.032 [2024-04-15 22:58:30.831177] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.032 [2024-04-15 22:58:30.831185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.032 [2024-04-15 22:58:30.833346] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.295 [2024-04-15 22:58:30.842493] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.295 [2024-04-15 22:58:30.843031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.843375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.843385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.295 [2024-04-15 22:58:30.843393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.295 [2024-04-15 22:58:30.843580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.295 [2024-04-15 22:58:30.843688] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.295 [2024-04-15 22:58:30.843695] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.295 [2024-04-15 22:58:30.843702] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.295 [2024-04-15 22:58:30.846024] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.295 [2024-04-15 22:58:30.855090] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.295 [2024-04-15 22:58:30.855695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.856115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.856127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.295 [2024-04-15 22:58:30.856137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.295 [2024-04-15 22:58:30.856244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.295 [2024-04-15 22:58:30.856354] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.295 [2024-04-15 22:58:30.856362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.295 [2024-04-15 22:58:30.856369] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.295 [2024-04-15 22:58:30.858554] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.295 [2024-04-15 22:58:30.867642] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.295 [2024-04-15 22:58:30.868171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.868515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.868525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.295 [2024-04-15 22:58:30.868532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.295 [2024-04-15 22:58:30.868702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.295 [2024-04-15 22:58:30.868847] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.295 [2024-04-15 22:58:30.868854] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.295 [2024-04-15 22:58:30.868861] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.295 [2024-04-15 22:58:30.870996] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.295 [2024-04-15 22:58:30.879996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.295 [2024-04-15 22:58:30.880489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.880862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.880874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.295 [2024-04-15 22:58:30.880881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.295 [2024-04-15 22:58:30.881007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.295 [2024-04-15 22:58:30.881132] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.295 [2024-04-15 22:58:30.881140] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.295 [2024-04-15 22:58:30.881146] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.295 [2024-04-15 22:58:30.883622] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.295 [2024-04-15 22:58:30.892459] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.295 [2024-04-15 22:58:30.893079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.893504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.893517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.295 [2024-04-15 22:58:30.893527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.295 [2024-04-15 22:58:30.893698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.295 [2024-04-15 22:58:30.893792] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.295 [2024-04-15 22:58:30.893800] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.295 [2024-04-15 22:58:30.893807] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.295 [2024-04-15 22:58:30.896189] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.295 [2024-04-15 22:58:30.905094] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.295 [2024-04-15 22:58:30.905472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.905824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.905835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.295 [2024-04-15 22:58:30.905843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.295 [2024-04-15 22:58:30.905950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.295 [2024-04-15 22:58:30.906095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.295 [2024-04-15 22:58:30.906102] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.295 [2024-04-15 22:58:30.906109] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.295 [2024-04-15 22:58:30.908318] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.295 [2024-04-15 22:58:30.917683] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.295 [2024-04-15 22:58:30.918061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.918410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.918420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.295 [2024-04-15 22:58:30.918432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.295 [2024-04-15 22:58:30.918581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.295 [2024-04-15 22:58:30.918745] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.295 [2024-04-15 22:58:30.918753] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.295 [2024-04-15 22:58:30.918760] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.295 [2024-04-15 22:58:30.921184] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.295 [2024-04-15 22:58:30.930253] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.295 [2024-04-15 22:58:30.930741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.931083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.295 [2024-04-15 22:58:30.931092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.295 [2024-04-15 22:58:30.931099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:30.931243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:30.931369] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:30.931377] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:30.931384] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:30.933669] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.296 [2024-04-15 22:58:30.942793] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:30.943382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.943790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.943805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:30.943814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:30.943940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:30.944032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:30.944040] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:30.944048] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:30.946327] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.296 [2024-04-15 22:58:30.955157] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:30.955665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.956011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.956021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:30.956028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:30.956195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:30.956340] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:30.956347] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:30.956355] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:30.958625] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.296 [2024-04-15 22:58:30.967649] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:30.968149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.968495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.968504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:30.968512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:30.968734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:30.968842] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:30.968850] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:30.968857] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:30.971156] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.296 [2024-04-15 22:58:30.980406] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:30.980867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.981074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.981083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:30.981091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:30.981216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:30.981341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:30.981350] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:30.981356] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:30.983791] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.296 [2024-04-15 22:58:30.993065] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:30.993610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.993980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:30.993989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:30.993996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:30.994159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:30.994306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:30.994314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:30.994321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:30.996794] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.296 [2024-04-15 22:58:31.005337] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:31.005948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:31.006222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:31.006235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:31.006244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:31.006389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:31.006537] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:31.006553] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:31.006561] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:31.008647] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.296 [2024-04-15 22:58:31.017728] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:31.018350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:31.018671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:31.018686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:31.018695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:31.018840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:31.019006] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:31.019014] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:31.019021] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:31.021121] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.296 [2024-04-15 22:58:31.030291] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:31.030765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:31.031151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:31.031160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:31.031168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:31.031349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:31.031531] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:31.031547] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:31.031555] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.296 [2024-04-15 22:58:31.033860] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.296 [2024-04-15 22:58:31.042842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.296 [2024-04-15 22:58:31.043371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:31.043640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.296 [2024-04-15 22:58:31.043649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.296 [2024-04-15 22:58:31.043657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.296 [2024-04-15 22:58:31.043801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.296 [2024-04-15 22:58:31.043926] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.296 [2024-04-15 22:58:31.043934] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.296 [2024-04-15 22:58:31.043941] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.297 [2024-04-15 22:58:31.046432] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.297 [2024-04-15 22:58:31.055291] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.297 [2024-04-15 22:58:31.055863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.297 [2024-04-15 22:58:31.056223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.297 [2024-04-15 22:58:31.056236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.297 [2024-04-15 22:58:31.056245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.297 [2024-04-15 22:58:31.056390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.297 [2024-04-15 22:58:31.056481] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.297 [2024-04-15 22:58:31.056489] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.297 [2024-04-15 22:58:31.056497] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.297 [2024-04-15 22:58:31.058628] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.297 [2024-04-15 22:58:31.067744] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.297 [2024-04-15 22:58:31.068338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.297 [2024-04-15 22:58:31.068715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.297 [2024-04-15 22:58:31.068730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.297 [2024-04-15 22:58:31.068739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.297 [2024-04-15 22:58:31.068884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.297 [2024-04-15 22:58:31.069031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.297 [2024-04-15 22:58:31.069039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.297 [2024-04-15 22:58:31.069051] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.297 [2024-04-15 22:58:31.071439] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.297 [2024-04-15 22:58:31.080328] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.297 [2024-04-15 22:58:31.080815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.297 [2024-04-15 22:58:31.081214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.297 [2024-04-15 22:58:31.081224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.297 [2024-04-15 22:58:31.081232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.297 [2024-04-15 22:58:31.081414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.297 [2024-04-15 22:58:31.081502] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.297 [2024-04-15 22:58:31.081510] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.297 [2024-04-15 22:58:31.081517] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.297 [2024-04-15 22:58:31.083694] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.297 [2024-04-15 22:58:31.092920] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.297 [2024-04-15 22:58:31.093508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.297 [2024-04-15 22:58:31.093934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.297 [2024-04-15 22:58:31.093947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.297 [2024-04-15 22:58:31.093957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.297 [2024-04-15 22:58:31.094139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.297 [2024-04-15 22:58:31.094324] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.297 [2024-04-15 22:58:31.094332] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.297 [2024-04-15 22:58:31.094340] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.297 [2024-04-15 22:58:31.096502] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.559 [2024-04-15 22:58:31.105298] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.559 [2024-04-15 22:58:31.105885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.559 [2024-04-15 22:58:31.106224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.559 [2024-04-15 22:58:31.106234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.559 [2024-04-15 22:58:31.106242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.559 [2024-04-15 22:58:31.106424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.559 [2024-04-15 22:58:31.106576] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.559 [2024-04-15 22:58:31.106585] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.559 [2024-04-15 22:58:31.106592] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.559 [2024-04-15 22:58:31.108771] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.559 [2024-04-15 22:58:31.117902] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.559 [2024-04-15 22:58:31.118392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.559 [2024-04-15 22:58:31.118740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.559 [2024-04-15 22:58:31.118750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.559 [2024-04-15 22:58:31.118758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.559 [2024-04-15 22:58:31.118901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.559 [2024-04-15 22:58:31.119063] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.559 [2024-04-15 22:58:31.119071] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.559 [2024-04-15 22:58:31.119077] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.559 [2024-04-15 22:58:31.121279] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.560 [2024-04-15 22:58:31.130451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.131010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.131333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.131345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.131355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.131499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.131616] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.131625] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.131632] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.133831] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.560 [2024-04-15 22:58:31.143045] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.143540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.143889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.143899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.143907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.144070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.144195] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.144203] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.144210] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.146647] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.560 [2024-04-15 22:58:31.155400] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.155864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.156208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.156218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.156226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.156388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.156532] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.156540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.156553] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.158744] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.560 [2024-04-15 22:58:31.167908] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.168398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.168650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.168661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.168669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.168850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.169031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.169038] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.169045] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.171291] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.560 [2024-04-15 22:58:31.180395] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.180889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.181247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.181256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.181263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.181425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.181555] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.181563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.181569] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.183871] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.560 [2024-04-15 22:58:31.192753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.193340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.193792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.193807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.193817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.193943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.194053] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.194061] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.194068] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.196437] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.560 [2024-04-15 22:58:31.205140] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.205815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.206185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.206198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.206207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.206314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.206462] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.206470] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.206478] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.208832] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.560 [2024-04-15 22:58:31.217839] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.218372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.218746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.218783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.218796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.218962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.219110] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.219120] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.219128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.221509] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.560 [2024-04-15 22:58:31.230449] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.231010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.231377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.231393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.560 [2024-04-15 22:58:31.231402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.560 [2024-04-15 22:58:31.231574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.560 [2024-04-15 22:58:31.231723] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.560 [2024-04-15 22:58:31.231731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.560 [2024-04-15 22:58:31.231738] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.560 [2024-04-15 22:58:31.233991] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.560 [2024-04-15 22:58:31.242995] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.560 [2024-04-15 22:58:31.243560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.243956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.560 [2024-04-15 22:58:31.243965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.243973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.244118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.244243] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.244250] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.244258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.246654] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.561 [2024-04-15 22:58:31.255299] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.255912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.256285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.256297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.256306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.256470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.256681] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.256690] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.256698] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.258893] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.561 [2024-04-15 22:58:31.267649] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.268093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.268484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.268497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.268510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.268700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.268848] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.268856] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.268864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.271174] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.561 [2024-04-15 22:58:31.280001] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.280481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.280951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.280989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.281000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.281182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.281330] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.281338] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.281346] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.283570] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.561 [2024-04-15 22:58:31.292554] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.293169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.293536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.293556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.293566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.293748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.293895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.293903] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.293911] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.296111] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.561 [2024-04-15 22:58:31.305296] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.305854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.306224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.306236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.306245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.306375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.306504] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.306512] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.306520] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.308797] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.561 [2024-04-15 22:58:31.317633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.318254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.318752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.318789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.318801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.318986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.319115] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.319123] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.319131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.321488] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.561 [2024-04-15 22:58:31.330194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.330752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.331131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.331144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.331153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.331317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.331502] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.331510] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.331517] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.333962] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.561 [2024-04-15 22:58:31.342698] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.343225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.343568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.343579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.343587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.343750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.343921] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.343929] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.561 [2024-04-15 22:58:31.343936] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.561 [2024-04-15 22:58:31.346333] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.561 [2024-04-15 22:58:31.354981] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.561 [2024-04-15 22:58:31.355472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.355799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.561 [2024-04-15 22:58:31.355810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.561 [2024-04-15 22:58:31.355817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.561 [2024-04-15 22:58:31.355943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.561 [2024-04-15 22:58:31.356068] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.561 [2024-04-15 22:58:31.356076] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.562 [2024-04-15 22:58:31.356082] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.562 [2024-04-15 22:58:31.358457] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.824 [2024-04-15 22:58:31.367548] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.368029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.368370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.368379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.368386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.368555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.368718] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.824 [2024-04-15 22:58:31.368726] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.824 [2024-04-15 22:58:31.368733] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.824 [2024-04-15 22:58:31.371015] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.824 [2024-04-15 22:58:31.380081] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.380755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.381130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.381142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.381152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.381372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.381482] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.824 [2024-04-15 22:58:31.381494] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.824 [2024-04-15 22:58:31.381502] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.824 [2024-04-15 22:58:31.384060] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.824 [2024-04-15 22:58:31.392676] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.393260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.393711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.393725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.393734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.393916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.394045] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.824 [2024-04-15 22:58:31.394053] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.824 [2024-04-15 22:58:31.394061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.824 [2024-04-15 22:58:31.396409] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.824 [2024-04-15 22:58:31.405054] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.405644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.406026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.406038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.406047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.406154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.406283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.824 [2024-04-15 22:58:31.406291] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.824 [2024-04-15 22:58:31.406299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.824 [2024-04-15 22:58:31.408540] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.824 [2024-04-15 22:58:31.417765] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.418347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.418718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.418731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.418741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.418848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.418995] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.824 [2024-04-15 22:58:31.419003] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.824 [2024-04-15 22:58:31.419015] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.824 [2024-04-15 22:58:31.421352] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.824 [2024-04-15 22:58:31.430292] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.430876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.431248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.431260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.431270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.431396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.431553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.824 [2024-04-15 22:58:31.431563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.824 [2024-04-15 22:58:31.431570] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.824 [2024-04-15 22:58:31.433971] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.824 [2024-04-15 22:58:31.442594] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.443174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.443551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.443564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.443574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.443719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.443866] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.824 [2024-04-15 22:58:31.443874] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.824 [2024-04-15 22:58:31.443882] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.824 [2024-04-15 22:58:31.446098] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.824 [2024-04-15 22:58:31.455125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.455688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.456117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.456130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.456139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.456284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.456432] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.824 [2024-04-15 22:58:31.456440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.824 [2024-04-15 22:58:31.456447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.824 [2024-04-15 22:58:31.458727] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.824 [2024-04-15 22:58:31.467605] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.824 [2024-04-15 22:58:31.468224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.468500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.824 [2024-04-15 22:58:31.468513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.824 [2024-04-15 22:58:31.468522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.824 [2024-04-15 22:58:31.468657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.824 [2024-04-15 22:58:31.468806] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.825 [2024-04-15 22:58:31.468814] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.825 [2024-04-15 22:58:31.468821] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.825 [2024-04-15 22:58:31.471205] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.825 [2024-04-15 22:58:31.480180] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.825 [2024-04-15 22:58:31.480812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.481182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.481195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.825 [2024-04-15 22:58:31.481204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.825 [2024-04-15 22:58:31.481405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.825 [2024-04-15 22:58:31.481534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.825 [2024-04-15 22:58:31.481551] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.825 [2024-04-15 22:58:31.481560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.825 [2024-04-15 22:58:31.483779] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.825 [2024-04-15 22:58:31.492677] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.825 [2024-04-15 22:58:31.493231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.493741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.493778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.825 [2024-04-15 22:58:31.493789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.825 [2024-04-15 22:58:31.493934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.825 [2024-04-15 22:58:31.494100] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.825 [2024-04-15 22:58:31.494108] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.825 [2024-04-15 22:58:31.494116] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.825 [2024-04-15 22:58:31.496354] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.825 [2024-04-15 22:58:31.505278] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.825 [2024-04-15 22:58:31.505819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.506193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.506203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.825 [2024-04-15 22:58:31.506211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.825 [2024-04-15 22:58:31.506374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.825 [2024-04-15 22:58:31.506519] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.825 [2024-04-15 22:58:31.506527] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.825 [2024-04-15 22:58:31.506534] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.825 [2024-04-15 22:58:31.508750] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.825 [2024-04-15 22:58:31.517777] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.825 [2024-04-15 22:58:31.518335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.518685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.518695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.825 [2024-04-15 22:58:31.518703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.825 [2024-04-15 22:58:31.518828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.825 [2024-04-15 22:58:31.518935] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.825 [2024-04-15 22:58:31.518943] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.825 [2024-04-15 22:58:31.518950] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.825 [2024-04-15 22:58:31.521188] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.825 [2024-04-15 22:58:31.530305] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.825 [2024-04-15 22:58:31.530878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.531250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.531263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.825 [2024-04-15 22:58:31.531272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.825 [2024-04-15 22:58:31.531380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.825 [2024-04-15 22:58:31.531572] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.825 [2024-04-15 22:58:31.531581] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.825 [2024-04-15 22:58:31.531588] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.825 [2024-04-15 22:58:31.533918] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.825 [2024-04-15 22:58:31.542762] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.825 [2024-04-15 22:58:31.543401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.543787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.543801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.825 [2024-04-15 22:58:31.543810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.825 [2024-04-15 22:58:31.543937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.825 [2024-04-15 22:58:31.544065] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.825 [2024-04-15 22:58:31.544074] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.825 [2024-04-15 22:58:31.544081] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.825 [2024-04-15 22:58:31.546355] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.825 [2024-04-15 22:58:31.555227] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.825 [2024-04-15 22:58:31.555693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.555959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.555969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.825 [2024-04-15 22:58:31.555977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.825 [2024-04-15 22:58:31.556084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.825 [2024-04-15 22:58:31.556191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.825 [2024-04-15 22:58:31.556199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.825 [2024-04-15 22:58:31.556205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.825 [2024-04-15 22:58:31.558491] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.825 [2024-04-15 22:58:31.567806] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.825 [2024-04-15 22:58:31.568435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.568819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.825 [2024-04-15 22:58:31.568833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.825 [2024-04-15 22:58:31.568843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.825 [2024-04-15 22:58:31.569044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.825 [2024-04-15 22:58:31.569191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.826 [2024-04-15 22:58:31.569200] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.826 [2024-04-15 22:58:31.569207] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.826 [2024-04-15 22:58:31.571351] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.826 [2024-04-15 22:58:31.580430] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.826 [2024-04-15 22:58:31.580846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.826 [2024-04-15 22:58:31.581197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.826 [2024-04-15 22:58:31.581206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.826 [2024-04-15 22:58:31.581218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.826 [2024-04-15 22:58:31.581363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.826 [2024-04-15 22:58:31.581552] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.826 [2024-04-15 22:58:31.581561] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.826 [2024-04-15 22:58:31.581568] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.826 [2024-04-15 22:58:31.583927] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.826 [2024-04-15 22:58:31.592924] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.826 [2024-04-15 22:58:31.593501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.826 [2024-04-15 22:58:31.593791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.826 [2024-04-15 22:58:31.593807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.826 [2024-04-15 22:58:31.593816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.826 [2024-04-15 22:58:31.594017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.826 [2024-04-15 22:58:31.594185] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.826 [2024-04-15 22:58:31.594193] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.826 [2024-04-15 22:58:31.594200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.826 [2024-04-15 22:58:31.596438] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.826 [2024-04-15 22:58:31.605689] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.826 [2024-04-15 22:58:31.606316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.826 [2024-04-15 22:58:31.606690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.826 [2024-04-15 22:58:31.606704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.826 [2024-04-15 22:58:31.606714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.826 [2024-04-15 22:58:31.606896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.826 [2024-04-15 22:58:31.607025] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.826 [2024-04-15 22:58:31.607033] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.826 [2024-04-15 22:58:31.607040] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.826 [2024-04-15 22:58:31.609444] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.826 [2024-04-15 22:58:31.618273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.826 [2024-04-15 22:58:31.618913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.826 [2024-04-15 22:58:31.619285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.826 [2024-04-15 22:58:31.619298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:46.826 [2024-04-15 22:58:31.619307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:46.826 [2024-04-15 22:58:31.619493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:46.826 [2024-04-15 22:58:31.619638] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.826 [2024-04-15 22:58:31.619648] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.826 [2024-04-15 22:58:31.619656] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.826 [2024-04-15 22:58:31.621929] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.826 [2024-04-15 22:58:31.630630] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.088 [2024-04-15 22:58:31.631275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.088 [2024-04-15 22:58:31.631651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.088 [2024-04-15 22:58:31.631666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.088 [2024-04-15 22:58:31.631675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.088 [2024-04-15 22:58:31.631820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.088 [2024-04-15 22:58:31.632006] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.632014] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.632022] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.634207] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.089 [2024-04-15 22:58:31.643141] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.643466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.643836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.643847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.643855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.643981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.644107] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.644115] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.644122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.646369] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.089 [2024-04-15 22:58:31.655528] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.656140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.656515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.656528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.656537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.656709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.656842] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.656850] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.656858] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.659091] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.089 [2024-04-15 22:58:31.668252] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.668745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.669087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.669096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.669104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.669212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.669319] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.669327] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.669333] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.671547] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.089 [2024-04-15 22:58:31.680956] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.681585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.681889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.681902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.681912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.682056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.682204] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.682212] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.682219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.684649] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.089 [2024-04-15 22:58:31.693528] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.694155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.694525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.694538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.694556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.694720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.694886] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.694898] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.694906] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.697106] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.089 [2024-04-15 22:58:31.706172] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.706632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.707049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.707062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.707071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.707291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.707457] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.707466] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.707473] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.709679] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.089 [2024-04-15 22:58:31.718660] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.719241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.719612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.719626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.719636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.719762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.719853] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.719861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.719869] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.722132] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.089 [2024-04-15 22:58:31.731127] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.731619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.732004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.732014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.732022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.732166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.732310] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.732317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.732328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.734503] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.089 [2024-04-15 22:58:31.743567] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.744107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.744393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.089 [2024-04-15 22:58:31.744406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.089 [2024-04-15 22:58:31.744416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.089 [2024-04-15 22:58:31.744523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.089 [2024-04-15 22:58:31.744680] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.089 [2024-04-15 22:58:31.744689] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.089 [2024-04-15 22:58:31.744697] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.089 [2024-04-15 22:58:31.747024] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.089 [2024-04-15 22:58:31.756120] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.089 [2024-04-15 22:58:31.756682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.757064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.757076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.757086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.757287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.757397] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.757405] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.757413] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.759916] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.090 [2024-04-15 22:58:31.768489] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.769123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.769495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.769507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.769516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.769689] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.769819] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.769827] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.769834] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.772034] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.090 [2024-04-15 22:58:31.780943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.781452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.781779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.781793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.781802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.782004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.782207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.782215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.782223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.784496] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.090 [2024-04-15 22:58:31.793412] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.794023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.794382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.794394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.794403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.794594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.794761] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.794769] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.794777] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.797032] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.090 [2024-04-15 22:58:31.805862] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.806472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.806840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.806854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.806864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.807027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.807138] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.807146] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.807153] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.809370] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.090 [2024-04-15 22:58:31.818509] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.819137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.819460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.819473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.819482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.819655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.819803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.819811] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.819818] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.822136] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.090 [2024-04-15 22:58:31.831018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.831648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.831983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.831996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.832006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.832169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.832354] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.832362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.832370] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.834653] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.090 [2024-04-15 22:58:31.843387] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.843968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.844346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.844359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.844368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.844532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.844667] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.844676] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.844683] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.846770] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.090 [2024-04-15 22:58:31.855994] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.856522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.856865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.856879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.856888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.090 [2024-04-15 22:58:31.857052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.090 [2024-04-15 22:58:31.857218] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.090 [2024-04-15 22:58:31.857226] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.090 [2024-04-15 22:58:31.857233] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.090 [2024-04-15 22:58:31.859305] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.090 [2024-04-15 22:58:31.868537] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.090 [2024-04-15 22:58:31.869117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.869447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.090 [2024-04-15 22:58:31.869459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.090 [2024-04-15 22:58:31.869469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.091 [2024-04-15 22:58:31.869623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.091 [2024-04-15 22:58:31.869809] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.091 [2024-04-15 22:58:31.869817] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.091 [2024-04-15 22:58:31.869825] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.091 [2024-04-15 22:58:31.872040] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.091 [2024-04-15 22:58:31.881059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.091 [2024-04-15 22:58:31.881673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.091 [2024-04-15 22:58:31.882051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.091 [2024-04-15 22:58:31.882063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.091 [2024-04-15 22:58:31.882072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.091 [2024-04-15 22:58:31.882218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.091 [2024-04-15 22:58:31.882384] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.091 [2024-04-15 22:58:31.882392] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.091 [2024-04-15 22:58:31.882399] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.091 [2024-04-15 22:58:31.884736] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.091 [2024-04-15 22:58:31.893620] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.091 [2024-04-15 22:58:31.894196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.091 [2024-04-15 22:58:31.894610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.091 [2024-04-15 22:58:31.894628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.091 [2024-04-15 22:58:31.894638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.091 [2024-04-15 22:58:31.894783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.091 [2024-04-15 22:58:31.894911] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.091 [2024-04-15 22:58:31.894919] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.091 [2024-04-15 22:58:31.894927] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.353 [2024-04-15 22:58:31.897163] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.353 [2024-04-15 22:58:31.906223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.353 [2024-04-15 22:58:31.906755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.353 [2024-04-15 22:58:31.907135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.353 [2024-04-15 22:58:31.907145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.353 [2024-04-15 22:58:31.907153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.353 [2024-04-15 22:58:31.907242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.353 [2024-04-15 22:58:31.907404] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.353 [2024-04-15 22:58:31.907412] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.353 [2024-04-15 22:58:31.907419] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.353 [2024-04-15 22:58:31.909707] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.353 [2024-04-15 22:58:31.918656] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.353 [2024-04-15 22:58:31.919281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.353 [2024-04-15 22:58:31.919606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.353 [2024-04-15 22:58:31.919620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.353 [2024-04-15 22:58:31.919630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.353 [2024-04-15 22:58:31.919775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.353 [2024-04-15 22:58:31.919922] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.353 [2024-04-15 22:58:31.919930] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.353 [2024-04-15 22:58:31.919938] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.353 [2024-04-15 22:58:31.922183] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.353 [2024-04-15 22:58:31.930985] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.353 [2024-04-15 22:58:31.931580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.353 [2024-04-15 22:58:31.932012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.353 [2024-04-15 22:58:31.932024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:31.932042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:31.932168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:31.932278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:31.932286] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:31.932294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:31.934797] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.354 [2024-04-15 22:58:31.943696] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:31.944313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.944638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.944652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:31.944662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:31.944807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:31.944954] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:31.944962] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:31.944970] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:31.947187] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.354 [2024-04-15 22:58:31.956360] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:31.956863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.957228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.957237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:31.957244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:31.957370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:31.957533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:31.957540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:31.957554] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:31.959950] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.354 [2024-04-15 22:58:31.968637] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:31.969185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.969563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.969577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:31.969586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:31.969754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:31.969864] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:31.969872] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:31.969879] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:31.972062] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.354 [2024-04-15 22:58:31.981235] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:31.981829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.982199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.982211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:31.982220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:31.982440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:31.982596] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:31.982605] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:31.982612] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:31.984866] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.354 [2024-04-15 22:58:31.993727] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:31.994253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.994627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:31.994640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:31.994650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:31.994832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:31.994943] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:31.994951] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:31.994958] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:31.997177] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.354 [2024-04-15 22:58:32.006359] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:32.006936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:32.007303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:32.007315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:32.007324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:32.007506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:32.007705] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:32.007714] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:32.007722] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:32.009921] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.354 [2024-04-15 22:58:32.018929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:32.019310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:32.019697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:32.019708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:32.019716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:32.019841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:32.019986] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:32.019994] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:32.020001] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:32.022256] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.354 [2024-04-15 22:58:32.031570] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:32.032146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:32.032517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:32.032529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:32.032538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:32.032747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:32.032931] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:32.032941] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.354 [2024-04-15 22:58:32.032948] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.354 [2024-04-15 22:58:32.035112] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.354 [2024-04-15 22:58:32.044266] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.354 [2024-04-15 22:58:32.044896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:32.045267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.354 [2024-04-15 22:58:32.045279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.354 [2024-04-15 22:58:32.045288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.354 [2024-04-15 22:58:32.045489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.354 [2024-04-15 22:58:32.045625] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.354 [2024-04-15 22:58:32.045638] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.045646] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.047937] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.355 [2024-04-15 22:58:32.056816] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.057435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.057802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.057815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.057825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.057951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.058117] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.058125] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.058132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.060405] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.355 [2024-04-15 22:58:32.069312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.069876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.070242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.070254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.070263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.070445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.070640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.070649] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.070656] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.072909] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.355 [2024-04-15 22:58:32.081705] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.082317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.082718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.082732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.082742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.082906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.083053] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.083061] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.083073] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.085295] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.355 [2024-04-15 22:58:32.094161] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.094767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.095137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.095149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.095159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.095284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.095395] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.095403] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.095410] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.097725] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.355 [2024-04-15 22:58:32.106671] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.107050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.107352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.107363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.107371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.107516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.107669] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.107677] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.107684] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.109947] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.355 [2024-04-15 22:58:32.119297] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.119891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.120261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.120273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.120283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.120465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.120590] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.120599] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.120607] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.122675] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.355 [2024-04-15 22:58:32.132006] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.132616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.133032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.133044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.133054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.133236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.133384] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.133392] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.133400] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.135627] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.355 [2024-04-15 22:58:32.144563] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.145177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.145557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.145570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.145579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.145742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.145871] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.145880] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.145887] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.355 [2024-04-15 22:58:32.148253] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.355 [2024-04-15 22:58:32.157150] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.355 [2024-04-15 22:58:32.157782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.158154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.355 [2024-04-15 22:58:32.158166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.355 [2024-04-15 22:58:32.158176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.355 [2024-04-15 22:58:32.158358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.355 [2024-04-15 22:58:32.158487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.355 [2024-04-15 22:58:32.158495] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.355 [2024-04-15 22:58:32.158502] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.160927] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.617 [2024-04-15 22:58:32.169443] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.170041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.170411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.170423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.170432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.170623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.170752] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.170760] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.170768] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.173114] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.617 [2024-04-15 22:58:32.182089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.182622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.183051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.183064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.183073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.183199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.183346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.183354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.183362] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.185474] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.617 [2024-04-15 22:58:32.194507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.195141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.195509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.195521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.195530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.195683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.195831] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.195840] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.195847] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.198045] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.617 [2024-04-15 22:58:32.206913] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.207385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.207809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.207825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.207834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.207942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.208107] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.208116] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.208123] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.210507] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.617 [2024-04-15 22:58:32.219324] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.219916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.220285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.220297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.220306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.220470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.220633] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.220642] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.220650] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.223127] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.617 [2024-04-15 22:58:32.231997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.232631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.233000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.233013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.233022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.233204] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.233334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.233342] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.233349] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.235346] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.617 [2024-04-15 22:58:32.244568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.245151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.245524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.245536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.245557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.245684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.245850] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.245859] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.245866] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.248047] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.617 [2024-04-15 22:58:32.257033] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.257615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.258023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.258036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.258045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.258133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.258262] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.258271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.258278] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.260576] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.617 [2024-04-15 22:58:32.269544] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.270072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.270413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.270423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.270431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.270520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.270707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.270715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.270722] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.273154] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.617 [2024-04-15 22:58:32.282011] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.282506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.282907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.282917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.282925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.283073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.283180] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.283188] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.283195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.617 [2024-04-15 22:58:32.285495] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.617 [2024-04-15 22:58:32.294444] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.617 [2024-04-15 22:58:32.294943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.295254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.617 [2024-04-15 22:58:32.295264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.617 [2024-04-15 22:58:32.295271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.617 [2024-04-15 22:58:32.295415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.617 [2024-04-15 22:58:32.295601] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.617 [2024-04-15 22:58:32.295610] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.617 [2024-04-15 22:58:32.295617] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.297806] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.618 [2024-04-15 22:58:32.307051] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.307546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.307872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.307881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.307889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.308014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.308157] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.308165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.308172] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.310665] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.618 [2024-04-15 22:58:32.319669] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.320176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.320555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.320569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.320579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.320723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.320894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.320903] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.320910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.323177] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.618 [2024-04-15 22:58:32.332098] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.332661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.333082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.333094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.333104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.333250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.333434] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.333443] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.333451] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.335692] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.618 [2024-04-15 22:58:32.344595] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.345135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.345430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.345440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.345448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.345617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.345725] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.345733] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.345740] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.347802] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.618 [2024-04-15 22:58:32.357134] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.357646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.358062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.358075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.358085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.358268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.358452] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.358465] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.358473] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.360678] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.618 [2024-04-15 22:58:32.369454] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.370088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.370470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.370483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.370492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.370662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.370810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.370818] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.370826] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.373153] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.618 [2024-04-15 22:58:32.382093] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.382408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.382780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.382818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.382830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.382957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.383141] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.383150] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.383157] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.385526] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.618 [2024-04-15 22:58:32.394616] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.395153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.395505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.395514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.395522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.395673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.395836] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.395844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.395855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.398307] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.618 [2024-04-15 22:58:32.406927] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.407393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.407839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.407877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.407887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.408032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.408161] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.408169] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.408177] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.410401] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.618 [2024-04-15 22:58:32.419376] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.618 [2024-04-15 22:58:32.419908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.420304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.618 [2024-04-15 22:58:32.420313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.618 [2024-04-15 22:58:32.420321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.618 [2024-04-15 22:58:32.420466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.618 [2024-04-15 22:58:32.420633] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.618 [2024-04-15 22:58:32.420641] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.618 [2024-04-15 22:58:32.420648] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.618 [2024-04-15 22:58:32.422906] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.881 [2024-04-15 22:58:32.432049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.881 [2024-04-15 22:58:32.432422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.432792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.432802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.881 [2024-04-15 22:58:32.432809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.881 [2024-04-15 22:58:32.432991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.881 [2024-04-15 22:58:32.433117] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.881 [2024-04-15 22:58:32.433124] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.881 [2024-04-15 22:58:32.433131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.881 [2024-04-15 22:58:32.435492] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.881 [2024-04-15 22:58:32.444350] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.881 [2024-04-15 22:58:32.444883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.445219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.445229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.881 [2024-04-15 22:58:32.445236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.881 [2024-04-15 22:58:32.445362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.881 [2024-04-15 22:58:32.445524] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.881 [2024-04-15 22:58:32.445532] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.881 [2024-04-15 22:58:32.445539] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.881 [2024-04-15 22:58:32.447548] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.881 [2024-04-15 22:58:32.456861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.881 [2024-04-15 22:58:32.457403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.457661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.457672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.881 [2024-04-15 22:58:32.457679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.881 [2024-04-15 22:58:32.457786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.881 [2024-04-15 22:58:32.457930] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.881 [2024-04-15 22:58:32.457937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.881 [2024-04-15 22:58:32.457944] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.881 [2024-04-15 22:58:32.460153] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.881 [2024-04-15 22:58:32.469490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.881 [2024-04-15 22:58:32.470014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.470282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.470292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.881 [2024-04-15 22:58:32.470300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.881 [2024-04-15 22:58:32.470425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.881 [2024-04-15 22:58:32.470573] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.881 [2024-04-15 22:58:32.470582] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.881 [2024-04-15 22:58:32.470589] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.881 [2024-04-15 22:58:32.472835] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.881 [2024-04-15 22:58:32.482081] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.881 [2024-04-15 22:58:32.482555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.482838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.482848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.881 [2024-04-15 22:58:32.482856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.881 [2024-04-15 22:58:32.483018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.881 [2024-04-15 22:58:32.483162] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.881 [2024-04-15 22:58:32.483169] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.881 [2024-04-15 22:58:32.483176] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.881 [2024-04-15 22:58:32.485532] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.881 [2024-04-15 22:58:32.494414] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.881 [2024-04-15 22:58:32.494822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.495167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.495176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.881 [2024-04-15 22:58:32.495184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.881 [2024-04-15 22:58:32.495290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.881 [2024-04-15 22:58:32.495434] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.881 [2024-04-15 22:58:32.495442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.881 [2024-04-15 22:58:32.495448] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.881 [2024-04-15 22:58:32.497681] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.881 [2024-04-15 22:58:32.506728] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.881 [2024-04-15 22:58:32.507192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.507536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.881 [2024-04-15 22:58:32.507550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.881 [2024-04-15 22:58:32.507558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.882 [2024-04-15 22:58:32.507721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.882 [2024-04-15 22:58:32.507809] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.882 [2024-04-15 22:58:32.507816] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.882 [2024-04-15 22:58:32.507823] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.882 [2024-04-15 22:58:32.510088] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.882 [2024-04-15 22:58:32.519305] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.882 [2024-04-15 22:58:32.519972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.520345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.520358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.882 [2024-04-15 22:58:32.520367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.882 [2024-04-15 22:58:32.520531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.882 [2024-04-15 22:58:32.520684] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.882 [2024-04-15 22:58:32.520693] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.882 [2024-04-15 22:58:32.520700] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.882 [2024-04-15 22:58:32.523129] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.882 [2024-04-15 22:58:32.531826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.882 [2024-04-15 22:58:32.532321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.532579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.532590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.882 [2024-04-15 22:58:32.532598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.882 [2024-04-15 22:58:32.532706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.882 [2024-04-15 22:58:32.532794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.882 [2024-04-15 22:58:32.532803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.882 [2024-04-15 22:58:32.532810] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.882 [2024-04-15 22:58:32.535170] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.882 [2024-04-15 22:58:32.544135] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.882 [2024-04-15 22:58:32.544755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.545228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.545240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.882 [2024-04-15 22:58:32.545250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.882 [2024-04-15 22:58:32.545432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.882 [2024-04-15 22:58:32.545624] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.882 [2024-04-15 22:58:32.545632] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.882 [2024-04-15 22:58:32.545640] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.882 [2024-04-15 22:58:32.548132] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.882 [2024-04-15 22:58:32.556619] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.882 [2024-04-15 22:58:32.557016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.557354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.557368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.882 [2024-04-15 22:58:32.557376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.882 [2024-04-15 22:58:32.557580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.882 [2024-04-15 22:58:32.557762] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.882 [2024-04-15 22:58:32.557770] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.882 [2024-04-15 22:58:32.557777] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.882 [2024-04-15 22:58:32.560081] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.882 [2024-04-15 22:58:32.569213] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.882 [2024-04-15 22:58:32.569879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.570206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.570219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.882 [2024-04-15 22:58:32.570228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.882 [2024-04-15 22:58:32.570391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.882 [2024-04-15 22:58:32.570539] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.882 [2024-04-15 22:58:32.570554] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.882 [2024-04-15 22:58:32.570561] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.882 [2024-04-15 22:58:32.572756] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.882 [2024-04-15 22:58:32.581805] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.882 [2024-04-15 22:58:32.582265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.582614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.582624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.882 [2024-04-15 22:58:32.582632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.882 [2024-04-15 22:58:32.582758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.882 [2024-04-15 22:58:32.582864] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.882 [2024-04-15 22:58:32.582872] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.882 [2024-04-15 22:58:32.582879] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.882 [2024-04-15 22:58:32.585106] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.882 [2024-04-15 22:58:32.594408] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.882 [2024-04-15 22:58:32.594908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.595257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.882 [2024-04-15 22:58:32.595266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.882 [2024-04-15 22:58:32.595277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.882 [2024-04-15 22:58:32.595477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.882 [2024-04-15 22:58:32.595700] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.882 [2024-04-15 22:58:32.595709] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.882 [2024-04-15 22:58:32.595716] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.882 [2024-04-15 22:58:32.598096] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.883 [2024-04-15 22:58:32.607028] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.883 [2024-04-15 22:58:32.607521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.607871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.607882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.883 [2024-04-15 22:58:32.607889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.883 [2024-04-15 22:58:32.607997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.883 [2024-04-15 22:58:32.608159] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.883 [2024-04-15 22:58:32.608167] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.883 [2024-04-15 22:58:32.608174] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.883 [2024-04-15 22:58:32.610495] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.883 [2024-04-15 22:58:32.619463] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.883 [2024-04-15 22:58:32.620022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.620367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.620377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.883 [2024-04-15 22:58:32.620385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.883 [2024-04-15 22:58:32.620529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.883 [2024-04-15 22:58:32.620715] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.883 [2024-04-15 22:58:32.620724] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.883 [2024-04-15 22:58:32.620730] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.883 [2024-04-15 22:58:32.623230] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.883 [2024-04-15 22:58:32.632039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.883 [2024-04-15 22:58:32.632579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.632806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.632816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.883 [2024-04-15 22:58:32.632823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.883 [2024-04-15 22:58:32.632952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.883 [2024-04-15 22:58:32.633116] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.883 [2024-04-15 22:58:32.633124] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.883 [2024-04-15 22:58:32.633131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.883 [2024-04-15 22:58:32.635494] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.883 [2024-04-15 22:58:32.644526] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.883 [2024-04-15 22:58:32.645015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.645355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.645365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.883 [2024-04-15 22:58:32.645372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.883 [2024-04-15 22:58:32.645535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.883 [2024-04-15 22:58:32.645628] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.883 [2024-04-15 22:58:32.645635] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.883 [2024-04-15 22:58:32.645642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.883 [2024-04-15 22:58:32.648096] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.883 [2024-04-15 22:58:32.657087] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.883 [2024-04-15 22:58:32.657580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.657933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.657943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.883 [2024-04-15 22:58:32.657950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.883 [2024-04-15 22:58:32.658095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.883 [2024-04-15 22:58:32.658222] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.883 [2024-04-15 22:58:32.658229] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.883 [2024-04-15 22:58:32.658236] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.883 [2024-04-15 22:58:32.660503] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.883 [2024-04-15 22:58:32.669568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.883 [2024-04-15 22:58:32.670164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.670534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.670555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.883 [2024-04-15 22:58:32.670565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.883 [2024-04-15 22:58:32.670691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.883 [2024-04-15 22:58:32.670843] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.883 [2024-04-15 22:58:32.670852] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.883 [2024-04-15 22:58:32.670859] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.883 [2024-04-15 22:58:32.673152] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.883 [2024-04-15 22:58:32.682205] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.883 [2024-04-15 22:58:32.682844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.683220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.883 [2024-04-15 22:58:32.683232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:47.883 [2024-04-15 22:58:32.683241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:47.883 [2024-04-15 22:58:32.683404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:47.883 [2024-04-15 22:58:32.683533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.883 [2024-04-15 22:58:32.683541] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.883 [2024-04-15 22:58:32.683558] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.883 [2024-04-15 22:58:32.686017] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.146 [2024-04-15 22:58:32.694641] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.146 [2024-04-15 22:58:32.695132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.695473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.695483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.146 [2024-04-15 22:58:32.695491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.146 [2024-04-15 22:58:32.695661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.146 [2024-04-15 22:58:32.695806] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.146 [2024-04-15 22:58:32.695814] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.146 [2024-04-15 22:58:32.695821] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.146 [2024-04-15 22:58:32.698013] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.146 [2024-04-15 22:58:32.707179] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.146 [2024-04-15 22:58:32.707836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.708205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.708218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.146 [2024-04-15 22:58:32.708227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.146 [2024-04-15 22:58:32.708391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.146 [2024-04-15 22:58:32.708520] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.146 [2024-04-15 22:58:32.708532] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.146 [2024-04-15 22:58:32.708539] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.146 [2024-04-15 22:58:32.711061] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.146 [2024-04-15 22:58:32.719700] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.146 [2024-04-15 22:58:32.720316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.720684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.720698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.146 [2024-04-15 22:58:32.720708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.146 [2024-04-15 22:58:32.720853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.146 [2024-04-15 22:58:32.721000] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.146 [2024-04-15 22:58:32.721009] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.146 [2024-04-15 22:58:32.721016] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.146 [2024-04-15 22:58:32.723062] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.146 [2024-04-15 22:58:32.732173] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.146 [2024-04-15 22:58:32.732836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.733205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.733218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.146 [2024-04-15 22:58:32.733228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.146 [2024-04-15 22:58:32.733410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.146 [2024-04-15 22:58:32.733538] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.146 [2024-04-15 22:58:32.733554] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.146 [2024-04-15 22:58:32.733562] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.146 [2024-04-15 22:58:32.735834] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.146 [2024-04-15 22:58:32.744753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.146 [2024-04-15 22:58:32.745323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.745681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.745692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.146 [2024-04-15 22:58:32.745700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.146 [2024-04-15 22:58:32.745844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.146 [2024-04-15 22:58:32.745970] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.146 [2024-04-15 22:58:32.745978] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.146 [2024-04-15 22:58:32.745989] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.146 [2024-04-15 22:58:32.748205] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.146 [2024-04-15 22:58:32.757097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.146 [2024-04-15 22:58:32.757766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.758136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.758149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.146 [2024-04-15 22:58:32.758159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.146 [2024-04-15 22:58:32.758286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.146 [2024-04-15 22:58:32.758471] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.146 [2024-04-15 22:58:32.758479] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.146 [2024-04-15 22:58:32.758486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.146 [2024-04-15 22:58:32.760596] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.146 [2024-04-15 22:58:32.769628] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.146 [2024-04-15 22:58:32.770179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.770475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.770484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.146 [2024-04-15 22:58:32.770492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.146 [2024-04-15 22:58:32.770641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.146 [2024-04-15 22:58:32.770767] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.146 [2024-04-15 22:58:32.770775] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.146 [2024-04-15 22:58:32.770782] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.146 [2024-04-15 22:58:32.773072] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.146 [2024-04-15 22:58:32.782084] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.146 [2024-04-15 22:58:32.782655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.783026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.146 [2024-04-15 22:58:32.783038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.146 [2024-04-15 22:58:32.783048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.146 [2024-04-15 22:58:32.783155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.146 [2024-04-15 22:58:32.783302] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.783310] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.783317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.785636] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.147 [2024-04-15 22:58:32.794606] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.795106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.795481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.795491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.795499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.795648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.795811] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.795819] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.795826] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.798092] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.147 [2024-04-15 22:58:32.807231] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.807847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.808219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.808232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.808241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.808405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.808533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.808548] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.808556] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.810772] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.147 [2024-04-15 22:58:32.819714] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.820210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.820555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.820566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.820573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.820699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.820862] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.820870] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.820876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.823225] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.147 [2024-04-15 22:58:32.832220] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.832857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.833253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.833266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.833275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.833439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.833575] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.833584] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.833592] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.835810] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.147 [2024-04-15 22:58:32.844648] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.845044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.845469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.845481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.845490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.845607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.845774] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.845782] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.845790] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.848191] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.147 [2024-04-15 22:58:32.857215] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.857838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.858209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.858221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.858230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.858375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.858504] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.858512] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.858520] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.860892] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.147 [2024-04-15 22:58:32.869700] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.870335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.870793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.870830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.870840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.871022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.871207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.871216] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.871223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.873502] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.147 [2024-04-15 22:58:32.882065] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.882514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.882941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.882955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.882965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.883166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.883351] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.883359] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.883367] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.885642] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.147 [2024-04-15 22:58:32.894382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.894970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.895352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.147 [2024-04-15 22:58:32.895362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.147 [2024-04-15 22:58:32.895369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.147 [2024-04-15 22:58:32.895495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.147 [2024-04-15 22:58:32.895683] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.147 [2024-04-15 22:58:32.895691] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.147 [2024-04-15 22:58:32.895698] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.147 [2024-04-15 22:58:32.898054] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.147 [2024-04-15 22:58:32.906914] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.147 [2024-04-15 22:58:32.907466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.148 [2024-04-15 22:58:32.907796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.148 [2024-04-15 22:58:32.907809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.148 [2024-04-15 22:58:32.907823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.148 [2024-04-15 22:58:32.907987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.148 [2024-04-15 22:58:32.908116] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.148 [2024-04-15 22:58:32.908125] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.148 [2024-04-15 22:58:32.908132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.148 [2024-04-15 22:58:32.910427] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.148 [2024-04-15 22:58:32.919617] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.148 [2024-04-15 22:58:32.920113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.148 [2024-04-15 22:58:32.920479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.148 [2024-04-15 22:58:32.920489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.148 [2024-04-15 22:58:32.920496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.148 [2024-04-15 22:58:32.920645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.148 [2024-04-15 22:58:32.920771] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.148 [2024-04-15 22:58:32.920779] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.148 [2024-04-15 22:58:32.920786] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.148 [2024-04-15 22:58:32.922838] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.148 [2024-04-15 22:58:32.932226] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.148 [2024-04-15 22:58:32.932884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.148 [2024-04-15 22:58:32.933260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.148 [2024-04-15 22:58:32.933272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.148 [2024-04-15 22:58:32.933282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.148 [2024-04-15 22:58:32.933483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.148 [2024-04-15 22:58:32.933638] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.148 [2024-04-15 22:58:32.933647] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.148 [2024-04-15 22:58:32.933655] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.148 [2024-04-15 22:58:32.936074] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.148 [2024-04-15 22:58:32.944659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.148 [2024-04-15 22:58:32.945283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.148 [2024-04-15 22:58:32.945659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.148 [2024-04-15 22:58:32.945673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.148 [2024-04-15 22:58:32.945682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.148 [2024-04-15 22:58:32.945812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.148 [2024-04-15 22:58:32.945922] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.148 [2024-04-15 22:58:32.945930] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.148 [2024-04-15 22:58:32.945938] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.148 [2024-04-15 22:58:32.948121] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
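Note on the repeated failures above: errno 111 on Linux is ECONNREFUSED, meaning the peer at 10.0.0.2 is reachable but nothing is accepting TCP connections on port 4420 while the target application is down, so every reconnect attempt by the host fails immediately and the controller reset is aborted. A minimal standalone sketch (illustrative only, not part of the SPDK test run; the address and port are simply the ones printed in the log) that produces the same errno when no listener is present:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same address/port as the failing qpair in the log above. */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* If the host is up but no NVMe-oF target is listening on the port,
     * connect() fails with errno 111 (ECONNREFUSED) -- the same value
     * the posix_sock_create errors above report. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}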
00:31:48.410 [2024-04-15 22:58:32.957193] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.410 [2024-04-15 22:58:32.957826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.410 [2024-04-15 22:58:32.958195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.410 [2024-04-15 22:58:32.958207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.410 [2024-04-15 22:58:32.958217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.410 [2024-04-15 22:58:32.958399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.410 [2024-04-15 22:58:32.958610] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.410 [2024-04-15 22:58:32.958619] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.410 [2024-04-15 22:58:32.958626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.410 [2024-04-15 22:58:32.960862] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.410 [2024-04-15 22:58:32.969666] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.410 [2024-04-15 22:58:32.970182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:32.970555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:32.970567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:32.970577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 [2024-04-15 22:58:32.970722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:32.970869] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:32.970877] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:32.970885] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:32.973067] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.411 [2024-04-15 22:58:32.982348] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.411 [2024-04-15 22:58:32.982806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:32.983185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:32.983197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:32.983206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 [2024-04-15 22:58:32.983388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:32.983564] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:32.983573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:32.983581] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:32.985815] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.411 [2024-04-15 22:58:32.994753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.411 [2024-04-15 22:58:32.995234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:32.995602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:32.995613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:32.995620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 [2024-04-15 22:58:32.995765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:32.995890] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:32.995898] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:32.995904] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:32.998229] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.411 [2024-04-15 22:58:33.007347] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.411 [2024-04-15 22:58:33.007718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.008132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.008144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:33.008154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 [2024-04-15 22:58:33.008354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:33.008521] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:33.008530] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:33.008537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:33.011058] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.411 [2024-04-15 22:58:33.019804] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.411 [2024-04-15 22:58:33.020383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.020765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.020779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:33.020788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 [2024-04-15 22:58:33.020970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:33.021118] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:33.021130] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:33.021138] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:33.023532] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.411 [2024-04-15 22:58:33.032263] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.411 [2024-04-15 22:58:33.032800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.033167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.033177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:33.033185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 [2024-04-15 22:58:33.033329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:33.033473] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:33.033481] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:33.033487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:33.035832] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.411 [2024-04-15 22:58:33.044640] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.411 [2024-04-15 22:58:33.045228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.045584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.045597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:33.045607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 [2024-04-15 22:58:33.045789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:33.045937] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:33.045945] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:33.045953] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:33.048095] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.411 [2024-04-15 22:58:33.057292] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.411 [2024-04-15 22:58:33.057875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.058260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.058273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:33.058282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 [2024-04-15 22:58:33.058427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:33.058621] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:33.058630] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:33.058642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:33.060988] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1329432 Killed "${NVMF_APP[@]}" "$@" 00:31:48.411 [2024-04-15 22:58:33.069789] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.411 22:58:33 -- host/bdevperf.sh@36 -- # tgt_init 00:31:48.411 22:58:33 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:48.411 [2024-04-15 22:58:33.070475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 22:58:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:48.411 22:58:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:48.411 [2024-04-15 22:58:33.070918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.411 [2024-04-15 22:58:33.070932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.411 [2024-04-15 22:58:33.070941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.411 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:31:48.411 [2024-04-15 22:58:33.071123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.411 [2024-04-15 22:58:33.071252] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.411 [2024-04-15 22:58:33.071260] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.411 [2024-04-15 22:58:33.071268] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.411 [2024-04-15 22:58:33.073504] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.411 22:58:33 -- nvmf/common.sh@469 -- # nvmfpid=1331158 00:31:48.411 22:58:33 -- nvmf/common.sh@470 -- # waitforlisten 1331158 00:31:48.411 22:58:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:48.411 22:58:33 -- common/autotest_common.sh@819 -- # '[' -z 1331158 ']' 00:31:48.412 22:58:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.412 22:58:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:48.412 22:58:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.412 22:58:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:48.412 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:31:48.412 [2024-04-15 22:58:33.082332] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 [2024-04-15 22:58:33.082928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.083351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.083365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.083375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.083565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.083695] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.412 [2024-04-15 22:58:33.083704] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.412 [2024-04-15 22:58:33.083711] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.412 [2024-04-15 22:58:33.086161] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
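The waitforlisten step above blocks until the freshly started nvmf_tgt (pid 1331158) is up and accepting RPC connections on /var/tmp/spdk.sock before the test continues. A minimal C sketch of the same idea (an illustration under that assumption, not the actual autotest helper; the socket path is the one printed in the log):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Return 0 once a server is accepting connections on `path`,
 * -1 if it does not come up within `timeout_s` seconds. */
static int wait_for_listen(const char *path, int timeout_s)
{
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < timeout_s * 10; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;   /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);   /* not listening yet, retry every 100 ms */
    }
    return -1;
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0) {
        printf("target is listening on /var/tmp/spdk.sock\n");
        return 0;
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}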
00:31:48.412 [2024-04-15 22:58:33.094865] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 [2024-04-15 22:58:33.095493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.095906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.095921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.095930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.096075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.096206] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.412 [2024-04-15 22:58:33.096214] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.412 [2024-04-15 22:58:33.096222] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.412 [2024-04-15 22:58:33.098349] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.412 [2024-04-15 22:58:33.107355] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 [2024-04-15 22:58:33.107818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.108186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.108198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.108208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.108335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.108464] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.412 [2024-04-15 22:58:33.108472] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.412 [2024-04-15 22:58:33.108479] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.412 [2024-04-15 22:58:33.110757] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.412 [2024-04-15 22:58:33.119899] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 [2024-04-15 22:58:33.120526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.120918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.120932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.120941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.121105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.121253] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.412 [2024-04-15 22:58:33.121261] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.412 [2024-04-15 22:58:33.121269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.412 [2024-04-15 22:58:33.122693] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:31:48.412 [2024-04-15 22:58:33.122737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.412 [2024-04-15 22:58:33.123440] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.412 [2024-04-15 22:58:33.132268] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 [2024-04-15 22:58:33.132866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.133236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.133248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.133258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.133403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.133559] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.412 [2024-04-15 22:58:33.133568] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.412 [2024-04-15 22:58:33.133575] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.412 [2024-04-15 22:58:33.135846] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.412 [2024-04-15 22:58:33.144799] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 [2024-04-15 22:58:33.145437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.145808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.145822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.145832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.146014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.146124] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.412 [2024-04-15 22:58:33.146133] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.412 [2024-04-15 22:58:33.146140] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.412 [2024-04-15 22:58:33.148456] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.412 [2024-04-15 22:58:33.157250] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.412 [2024-04-15 22:58:33.157784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.158133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.158143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.158151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.158314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.158459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.412 [2024-04-15 22:58:33.158468] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.412 [2024-04-15 22:58:33.158475] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.412 [2024-04-15 22:58:33.160898] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.412 [2024-04-15 22:58:33.169606] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 [2024-04-15 22:58:33.169987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.170336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.170346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.170353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.170515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.170682] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.412 [2024-04-15 22:58:33.170690] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.412 [2024-04-15 22:58:33.170698] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.412 [2024-04-15 22:58:33.172980] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.412 [2024-04-15 22:58:33.182054] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.412 [2024-04-15 22:58:33.182710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.183079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.412 [2024-04-15 22:58:33.183092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.412 [2024-04-15 22:58:33.183102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.412 [2024-04-15 22:58:33.183247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.412 [2024-04-15 22:58:33.183395] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.413 [2024-04-15 22:58:33.183404] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.413 [2024-04-15 22:58:33.183411] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.413 [2024-04-15 22:58:33.185782] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.413 [2024-04-15 22:58:33.193982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:48.413 [2024-04-15 22:58:33.194546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.413 [2024-04-15 22:58:33.195034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.413 [2024-04-15 22:58:33.195377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.413 [2024-04-15 22:58:33.195387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.413 [2024-04-15 22:58:33.195395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.413 [2024-04-15 22:58:33.195540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.413 [2024-04-15 22:58:33.195672] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.413 [2024-04-15 22:58:33.195680] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.413 [2024-04-15 22:58:33.195687] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.413 [2024-04-15 22:58:33.198032] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.413 [2024-04-15 22:58:33.206968] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.413 [2024-04-15 22:58:33.207466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.413 [2024-04-15 22:58:33.207894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.413 [2024-04-15 22:58:33.207904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.413 [2024-04-15 22:58:33.207912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.413 [2024-04-15 22:58:33.208037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.413 [2024-04-15 22:58:33.208163] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.413 [2024-04-15 22:58:33.208171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.413 [2024-04-15 22:58:33.208179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.413 [2024-04-15 22:58:33.210329] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.676 [2024-04-15 22:58:33.219382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.676 [2024-04-15 22:58:33.219880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.676 [2024-04-15 22:58:33.220227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.676 [2024-04-15 22:58:33.220237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.676 [2024-04-15 22:58:33.220244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.676 [2024-04-15 22:58:33.220390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.676 [2024-04-15 22:58:33.220574] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.676 [2024-04-15 22:58:33.220582] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.676 [2024-04-15 22:58:33.220589] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.676 [2024-04-15 22:58:33.223013] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.676 [2024-04-15 22:58:33.231823] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.676 [2024-04-15 22:58:33.232291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.676 [2024-04-15 22:58:33.232578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.676 [2024-04-15 22:58:33.232588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.676 [2024-04-15 22:58:33.232596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.676 [2024-04-15 22:58:33.232704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.676 [2024-04-15 22:58:33.232848] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.676 [2024-04-15 22:58:33.232857] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.676 [2024-04-15 22:58:33.232864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.676 [2024-04-15 22:58:33.235352] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.676 [2024-04-15 22:58:33.244464] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.676 [2024-04-15 22:58:33.245063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.676 [2024-04-15 22:58:33.245434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.676 [2024-04-15 22:58:33.245447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.676 [2024-04-15 22:58:33.245456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.676 [2024-04-15 22:58:33.245630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.676 [2024-04-15 22:58:33.245779] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.676 [2024-04-15 22:58:33.245786] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.676 [2024-04-15 22:58:33.245794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.676 [2024-04-15 22:58:33.248231] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.676 [2024-04-15 22:58:33.256754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:48.676 [2024-04-15 22:58:33.256866] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.676 [2024-04-15 22:58:33.256874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.676 [2024-04-15 22:58:33.256881] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:48.676 [2024-04-15 22:58:33.257018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.676 [2024-04-15 22:58:33.256996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.676 [2024-04-15 22:58:33.257131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.676 [2024-04-15 22:58:33.257131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.676 [2024-04-15 22:58:33.257384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.676 [2024-04-15 22:58:33.257757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.676 [2024-04-15 22:58:33.257769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.676 [2024-04-15 22:58:33.257778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.676 [2024-04-15 22:58:33.258002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.676 [2024-04-15 22:58:33.258185] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.258193] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.258200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.260284] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.677 [2024-04-15 22:58:33.269598] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.270286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.270676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.270691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.270701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.270848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.271018] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.271027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.271035] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.273105] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.677 [2024-04-15 22:58:33.282111] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.282682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.283120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.283133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.283143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.283308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.283474] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.283483] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.283491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.285731] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.677 [2024-04-15 22:58:33.294560] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.295215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.295598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.295612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.295622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.295787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.295972] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.295980] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.295988] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.298208] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.677 [2024-04-15 22:58:33.307120] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.307736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.308019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.308031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.308041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.308168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.308334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.308348] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.308356] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.310650] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.677 [2024-04-15 22:58:33.319677] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.320312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.320628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.320642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.320652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.320873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.321077] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.321086] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.321093] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.323492] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.677 [2024-04-15 22:58:33.332025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.332655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.333037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.333049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.333059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.333222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.333407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.333416] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.333423] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.335643] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.677 [2024-04-15 22:58:33.344633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.345225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.345458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.345470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.345480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.345632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.345800] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.345808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.345820] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.348275] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.677 [2024-04-15 22:58:33.357085] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.357782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.357913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.357924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.357934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.358135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.358302] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.358310] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.358318] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.360499] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.677 [2024-04-15 22:58:33.369716] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.370364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.370753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.677 [2024-04-15 22:58:33.370768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.677 [2024-04-15 22:58:33.370777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.677 [2024-04-15 22:58:33.370941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.677 [2024-04-15 22:58:33.371033] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.677 [2024-04-15 22:58:33.371041] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.677 [2024-04-15 22:58:33.371048] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.677 [2024-04-15 22:58:33.373099] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.677 [2024-04-15 22:58:33.382038] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.677 [2024-04-15 22:58:33.382628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.383082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.383094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.678 [2024-04-15 22:58:33.383104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.678 [2024-04-15 22:58:33.383286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.678 [2024-04-15 22:58:33.383471] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.678 [2024-04-15 22:58:33.383479] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.678 [2024-04-15 22:58:33.383487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.678 [2024-04-15 22:58:33.385545] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.678 [2024-04-15 22:58:33.394603] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.678 [2024-04-15 22:58:33.395095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.395362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.395372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.678 [2024-04-15 22:58:33.395380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.678 [2024-04-15 22:58:33.395585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.678 [2024-04-15 22:58:33.395749] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.678 [2024-04-15 22:58:33.395757] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.678 [2024-04-15 22:58:33.395764] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.678 [2024-04-15 22:58:33.398012] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.678 [2024-04-15 22:58:33.407178] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.678 [2024-04-15 22:58:33.407632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.407977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.407987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.678 [2024-04-15 22:58:33.407995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.678 [2024-04-15 22:58:33.408140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.678 [2024-04-15 22:58:33.408284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.678 [2024-04-15 22:58:33.408292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.678 [2024-04-15 22:58:33.408299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.678 [2024-04-15 22:58:33.410564] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.678 [2024-04-15 22:58:33.419561] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.678 [2024-04-15 22:58:33.419915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.420283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.420293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.678 [2024-04-15 22:58:33.420300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.678 [2024-04-15 22:58:33.420427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.678 [2024-04-15 22:58:33.420515] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.678 [2024-04-15 22:58:33.420522] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.678 [2024-04-15 22:58:33.420529] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.678 [2024-04-15 22:58:33.422643] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.678 [2024-04-15 22:58:33.432191] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.678 [2024-04-15 22:58:33.432848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.433077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.433090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.678 [2024-04-15 22:58:33.433100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.678 [2024-04-15 22:58:33.433282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.678 [2024-04-15 22:58:33.433431] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.678 [2024-04-15 22:58:33.433439] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.678 [2024-04-15 22:58:33.433447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.678 [2024-04-15 22:58:33.435633] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.678 [2024-04-15 22:58:33.444667] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.678 [2024-04-15 22:58:33.445261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.445650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.445664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.678 [2024-04-15 22:58:33.445674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.678 [2024-04-15 22:58:33.445894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.678 [2024-04-15 22:58:33.446079] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.678 [2024-04-15 22:58:33.446088] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.678 [2024-04-15 22:58:33.446095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.678 [2024-04-15 22:58:33.448330] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.678 [2024-04-15 22:58:33.457204] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.678 [2024-04-15 22:58:33.457809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.458172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.458185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.678 [2024-04-15 22:58:33.458195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.678 [2024-04-15 22:58:33.458339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.678 [2024-04-15 22:58:33.458468] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.678 [2024-04-15 22:58:33.458476] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.678 [2024-04-15 22:58:33.458483] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.678 [2024-04-15 22:58:33.460721] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.678 [2024-04-15 22:58:33.469685] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.678 [2024-04-15 22:58:33.470179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.470621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.678 [2024-04-15 22:58:33.470635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.678 [2024-04-15 22:58:33.470645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.678 [2024-04-15 22:58:33.470828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.678 [2024-04-15 22:58:33.470994] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.678 [2024-04-15 22:58:33.471002] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.678 [2024-04-15 22:58:33.471010] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.678 [2024-04-15 22:58:33.473265] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.678 [2024-04-15 22:58:33.482312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.678 [2024-04-15 22:58:33.482803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.483180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.483193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.483203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.483330] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.483459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.483467] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.483474] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.485790] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.941 [2024-04-15 22:58:33.494755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.941 [2024-04-15 22:58:33.495336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.495716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.495730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.495740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.495903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.496051] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.496059] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.496067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.498340] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.941 [2024-04-15 22:58:33.507278] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.941 [2024-04-15 22:58:33.507981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.508358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.508375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.508384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.508556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.508704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.508712] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.508719] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.510804] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.941 [2024-04-15 22:58:33.519604] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.941 [2024-04-15 22:58:33.520071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.520420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.520429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.520437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.520550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.520714] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.520721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.520728] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.523036] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.941 [2024-04-15 22:58:33.531913] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.941 [2024-04-15 22:58:33.532490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.532889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.532899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.532906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.533069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.533250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.533257] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.533264] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.535585] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.941 [2024-04-15 22:58:33.544384] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.941 [2024-04-15 22:58:33.545052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.545434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.545446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.545460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.545649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.545779] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.545787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.545794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.548199] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.941 [2024-04-15 22:58:33.556953] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.941 [2024-04-15 22:58:33.557605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.557865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.557877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.557886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.558050] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.558216] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.558224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.558232] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.560507] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.941 [2024-04-15 22:58:33.569435] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.941 [2024-04-15 22:58:33.569981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.570207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.570220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.570229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.570374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.570568] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.570576] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.570584] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.572742] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.941 [2024-04-15 22:58:33.581904] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.941 [2024-04-15 22:58:33.582439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.582806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.941 [2024-04-15 22:58:33.582816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.941 [2024-04-15 22:58:33.582823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.941 [2024-04-15 22:58:33.582936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.941 [2024-04-15 22:58:33.583024] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.941 [2024-04-15 22:58:33.583031] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.941 [2024-04-15 22:58:33.583038] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.941 [2024-04-15 22:58:33.585358] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.942 [2024-04-15 22:58:33.594304] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.594685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.594939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.594948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.594955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.595136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.595280] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.595287] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.595294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.597635] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.942 [2024-04-15 22:58:33.606928] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.607429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.607627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.607637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.607645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.607751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.607895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.607904] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.607911] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.610099] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.942 [2024-04-15 22:58:33.619410] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.619930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.620257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.620266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.620273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.620417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.620605] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.620614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.620621] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.622873] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.942 [2024-04-15 22:58:33.631833] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.632385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.632733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.632743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.632751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.632894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.633038] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.633046] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.633052] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.635332] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.942 [2024-04-15 22:58:33.644533] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.645090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.645521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.645530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.645538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.645705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.645830] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.645837] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.645844] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.648088] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.942 [2024-04-15 22:58:33.657036] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.657641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.658021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.658033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.658042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.658225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.658372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.658385] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.658392] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.660764] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.942 [2024-04-15 22:58:33.669539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.669940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.670323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.670336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.670347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.670530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.670704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.670712] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.670721] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.672827] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.942 [2024-04-15 22:58:33.681951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.682611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.683021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.683034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.683043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.683189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.683318] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.683326] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.683333] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.685613] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.942 [2024-04-15 22:58:33.694253] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.694686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.694948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.694961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.694971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.695172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.695283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.695292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.695306] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.697435] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.942 [2024-04-15 22:58:33.706778] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.707402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.707808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.707823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.707832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.707996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.708106] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.708114] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.708122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.710322] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.942 [2024-04-15 22:58:33.719423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.719946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.720118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.720130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.720139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.720265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.720412] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.720421] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.720428] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.722677] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.942 [2024-04-15 22:58:33.732112] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.732501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.732888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.732899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.732907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.733014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.733177] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.733185] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.733192] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.942 [2024-04-15 22:58:33.735383] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.942 [2024-04-15 22:58:33.744662] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.942 [2024-04-15 22:58:33.745244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.745679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.942 [2024-04-15 22:58:33.745694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:48.942 [2024-04-15 22:58:33.745703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:48.942 [2024-04-15 22:58:33.745886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:48.942 [2024-04-15 22:58:33.746015] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.942 [2024-04-15 22:58:33.746023] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.942 [2024-04-15 22:58:33.746030] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.205 [2024-04-15 22:58:33.748289] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.205 [2024-04-15 22:58:33.757044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.205 [2024-04-15 22:58:33.757654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.757875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.757888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.205 [2024-04-15 22:58:33.757897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.205 [2024-04-15 22:58:33.758043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.205 [2024-04-15 22:58:33.758191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.205 [2024-04-15 22:58:33.758199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.205 [2024-04-15 22:58:33.758208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.205 [2024-04-15 22:58:33.760503] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.205 [2024-04-15 22:58:33.769565] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.205 [2024-04-15 22:58:33.770131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.770509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.770523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.205 [2024-04-15 22:58:33.770532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.205 [2024-04-15 22:58:33.770685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.205 [2024-04-15 22:58:33.770797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.205 [2024-04-15 22:58:33.770805] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.205 [2024-04-15 22:58:33.770813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.205 [2024-04-15 22:58:33.773028] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.205 [2024-04-15 22:58:33.782157] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.205 [2024-04-15 22:58:33.782844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.783224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.783237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.205 [2024-04-15 22:58:33.783246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.205 [2024-04-15 22:58:33.783448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.205 [2024-04-15 22:58:33.783603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.205 [2024-04-15 22:58:33.783612] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.205 [2024-04-15 22:58:33.783620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.205 [2024-04-15 22:58:33.786023] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.205 [2024-04-15 22:58:33.794616] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.205 [2024-04-15 22:58:33.795203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.795584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.795599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.205 [2024-04-15 22:58:33.795608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.205 [2024-04-15 22:58:33.795753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.205 [2024-04-15 22:58:33.795902] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.205 [2024-04-15 22:58:33.795910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.205 [2024-04-15 22:58:33.795917] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.205 [2024-04-15 22:58:33.798207] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.205 [2024-04-15 22:58:33.806943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.205 [2024-04-15 22:58:33.807445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.807612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.205 [2024-04-15 22:58:33.807623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.205 [2024-04-15 22:58:33.807630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.205 [2024-04-15 22:58:33.807793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.205 [2024-04-15 22:58:33.807919] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.807926] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.807933] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 [2024-04-15 22:58:33.810253] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.206 [2024-04-15 22:58:33.819477] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.820130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.820503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.820516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.820525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 [2024-04-15 22:58:33.820696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.820844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.820853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.820860] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 [2024-04-15 22:58:33.823236] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.206 [2024-04-15 22:58:33.831858] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.832456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.832847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.832859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.832868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 [2024-04-15 22:58:33.833031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.833179] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.833187] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.833194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 [2024-04-15 22:58:33.835385] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.206 [2024-04-15 22:58:33.844311] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.844895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.845153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.845165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.845175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 [2024-04-15 22:58:33.845339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.845487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.845496] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.845503] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 [2024-04-15 22:58:33.847706] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.206 [2024-04-15 22:58:33.856805] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.857442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.857826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.857841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.857854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 [2024-04-15 22:58:33.858056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.858223] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.858267] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.858275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 [2024-04-15 22:58:33.860588] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.206 [2024-04-15 22:58:33.869393] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.869905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.870380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.870392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.870401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 [2024-04-15 22:58:33.870573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.870722] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.870730] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.870737] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 [2024-04-15 22:58:33.872970] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
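Every retry above (and in the blocks that follow) is the same three-step pattern: bdev_nvme disconnects the controller, the TCP connect() to 10.0.0.2:4420 is refused with errno 111 (ECONNREFUSED on Linux), and the reset attempt is declared failed before the next try a few milliseconds later. Nothing is accepting connections on that address yet, so every attempt fails identically until the target-side listener comes up. A quick way to confirm that from a shell while such a loop is spinning (a hypothetical debugging step, not part of the test harness; the namespace name is the one this log uses elsewhere):

    # is anything listening on the NVMe/TCP port inside the target namespace?
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep -q 4420 \
        || echo "no listener on 4420 -> connect() keeps returning ECONNREFUSED (111)"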
00:31:49.206 [2024-04-15 22:58:33.881592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.882202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.882600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.882615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.882624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 [2024-04-15 22:58:33.882806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.882972] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.882980] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.882989] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 [2024-04-15 22:58:33.885280] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.206 [2024-04-15 22:58:33.894201] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.894697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 22:58:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:49.206 [2024-04-15 22:58:33.895089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.895101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.895115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 22:58:33 -- common/autotest_common.sh@852 -- # return 0 00:31:49.206 [2024-04-15 22:58:33.895260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.895389] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.895397] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.895404] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 22:58:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:49.206 22:58:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:49.206 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:31:49.206 [2024-04-15 22:58:33.897572] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.206 [2024-04-15 22:58:33.906457] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.907131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.907410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.907423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.907432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 [2024-04-15 22:58:33.907605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.907716] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.907730] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.907738] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.206 [2024-04-15 22:58:33.910064] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.206 [2024-04-15 22:58:33.918948] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.206 [2024-04-15 22:58:33.919577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.919964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.206 [2024-04-15 22:58:33.919977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.206 [2024-04-15 22:58:33.919986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.206 [2024-04-15 22:58:33.920150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.206 [2024-04-15 22:58:33.920298] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.206 [2024-04-15 22:58:33.920306] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.206 [2024-04-15 22:58:33.920313] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.207 [2024-04-15 22:58:33.922564] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.207 [2024-04-15 22:58:33.931338] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.207 [2024-04-15 22:58:33.931886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.932113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.932124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.207 [2024-04-15 22:58:33.932137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.207 [2024-04-15 22:58:33.932263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.207 [2024-04-15 22:58:33.932427] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.207 [2024-04-15 22:58:33.932435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.207 [2024-04-15 22:58:33.932442] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.207 22:58:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.207 22:58:33 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:49.207 22:58:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.207 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:31:49.207 [2024-04-15 22:58:33.934504] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.207 [2024-04-15 22:58:33.936577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.207 22:58:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.207 22:58:33 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:49.207 22:58:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.207 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:31:49.207 [2024-04-15 22:58:33.944029] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.207 [2024-04-15 22:58:33.944527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.944874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.944884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.207 [2024-04-15 22:58:33.944891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.207 [2024-04-15 22:58:33.945054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.207 [2024-04-15 22:58:33.945179] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.207 [2024-04-15 22:58:33.945187] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.207 [2024-04-15 22:58:33.945194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:49.207 [2024-04-15 22:58:33.947573] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.207 [2024-04-15 22:58:33.956616] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.207 [2024-04-15 22:58:33.957025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.957404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.957417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.207 [2024-04-15 22:58:33.957426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.207 [2024-04-15 22:58:33.957599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.207 [2024-04-15 22:58:33.957710] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.207 [2024-04-15 22:58:33.957718] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.207 [2024-04-15 22:58:33.957726] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.207 [2024-04-15 22:58:33.960077] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.207 Malloc0 00:31:49.207 [2024-04-15 22:58:33.969144] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.207 22:58:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.207 [2024-04-15 22:58:33.969814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 22:58:33 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:49.207 [2024-04-15 22:58:33.970196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.970209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.207 [2024-04-15 22:58:33.970219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.207 22:58:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.207 [2024-04-15 22:58:33.970420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.207 [2024-04-15 22:58:33.970558] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.207 [2024-04-15 22:58:33.970567] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.207 [2024-04-15 22:58:33.970574] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.207 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:31:49.207 [2024-04-15 22:58:33.972789] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.207 22:58:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.207 [2024-04-15 22:58:33.981694] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.207 22:58:33 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:49.207 [2024-04-15 22:58:33.982087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 22:58:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.207 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:31:49.207 [2024-04-15 22:58:33.982485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.982495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.207 [2024-04-15 22:58:33.982503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.207 [2024-04-15 22:58:33.982671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.207 [2024-04-15 22:58:33.982815] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.207 [2024-04-15 22:58:33.982823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.207 [2024-04-15 22:58:33.982830] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.207 [2024-04-15 22:58:33.985018] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.207 22:58:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.207 22:58:33 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.207 22:58:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.207 [2024-04-15 22:58:33.994291] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.207 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:31:49.207 [2024-04-15 22:58:33.994827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.995084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.207 [2024-04-15 22:58:33.995093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1450 with addr=10.0.0.2, port=4420 00:31:49.207 [2024-04-15 22:58:33.995106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1450 is same with the state(5) to be set 00:31:49.207 [2024-04-15 22:58:33.995269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1450 (9): Bad file descriptor 00:31:49.207 [2024-04-15 22:58:33.995413] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.207 [2024-04-15 22:58:33.995421] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.207 [2024-04-15 22:58:33.995428] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:49.207 [2024-04-15 22:58:33.997901] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.207 [2024-04-15 22:58:34.000705] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.207 22:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.207 22:58:34 -- host/bdevperf.sh@38 -- # wait 1329850 00:31:49.207 [2024-04-15 22:58:34.006636] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.469 [2024-04-15 22:58:34.036318] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:59.473 00:31:59.473 Latency(us) 00:31:59.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.473 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:59.473 Verification LBA range: start 0x0 length 0x4000 00:31:59.473 Nvme1n1 : 15.00 13796.94 53.89 14604.28 0.00 4491.97 750.93 18459.31 00:31:59.473 =================================================================================================================== 00:31:59.473 Total : 13796.94 53.89 14604.28 0.00 4491.97 750.93 18459.31 00:31:59.473 22:58:42 -- host/bdevperf.sh@39 -- # sync 00:31:59.473 22:58:42 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:59.473 22:58:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.473 22:58:42 -- common/autotest_common.sh@10 -- # set +x 00:31:59.473 22:58:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.473 22:58:42 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:59.473 22:58:42 -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:59.473 22:58:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:59.473 22:58:42 -- nvmf/common.sh@116 -- # sync 00:31:59.473 22:58:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:59.473 22:58:42 -- nvmf/common.sh@119 -- # set +e 00:31:59.473 22:58:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:59.473 22:58:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:59.473 rmmod nvme_tcp 00:31:59.473 rmmod nvme_fabrics 00:31:59.473 rmmod nvme_keyring 00:31:59.473 22:58:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:59.473 22:58:42 -- nvmf/common.sh@123 -- # set -e 00:31:59.473 22:58:42 -- nvmf/common.sh@124 -- # return 0 00:31:59.473 22:58:42 -- nvmf/common.sh@477 -- # '[' -n 1331158 ']' 00:31:59.473 22:58:42 -- nvmf/common.sh@478 -- # killprocess 1331158 00:31:59.473 22:58:42 -- common/autotest_common.sh@926 -- # '[' -z 1331158 ']' 00:31:59.473 22:58:42 -- common/autotest_common.sh@930 -- # kill -0 1331158 00:31:59.473 22:58:42 -- common/autotest_common.sh@931 -- # uname 00:31:59.473 22:58:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:59.473 22:58:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1331158 00:31:59.473 22:58:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:59.473 22:58:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:59.473 22:58:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1331158' 00:31:59.473 killing process with pid 1331158 00:31:59.473 22:58:42 -- common/autotest_common.sh@945 -- # kill 1331158 00:31:59.473 22:58:42 -- common/autotest_common.sh@950 -- # wait 1331158 00:31:59.473 22:58:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:59.473 22:58:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:59.473 
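Pulling the target-side RPCs out of the interleaved output above, the order is: create the TCP transport, create a 64 MiB Malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the bdev as its namespace, and finally add the 10.0.0.2:4420 listener. Only after that last step does the host print "Resetting controller successful", and bdevperf then finishes its 15 s run at 13796.94 IOPS. The same sequence as a standalone sketch, using scripts/rpc.py instead of the harness's rpc_cmd wrapper (all arguments copied verbatim from the invocations above):

    # stand up the NVMe/TCP target backed by a RAM disk, as the test did above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420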
22:58:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:59.473 22:58:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:59.473 22:58:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:59.473 22:58:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.473 22:58:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.473 22:58:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.417 22:58:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:00.417 00:32:00.417 real 0m28.533s 00:32:00.417 user 1m3.147s 00:32:00.417 sys 0m7.576s 00:32:00.417 22:58:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.417 22:58:45 -- common/autotest_common.sh@10 -- # set +x 00:32:00.417 ************************************ 00:32:00.417 END TEST nvmf_bdevperf 00:32:00.417 ************************************ 00:32:00.417 22:58:45 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:00.417 22:58:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:00.417 22:58:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.417 22:58:45 -- common/autotest_common.sh@10 -- # set +x 00:32:00.417 ************************************ 00:32:00.417 START TEST nvmf_target_disconnect 00:32:00.417 ************************************ 00:32:00.417 22:58:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:00.417 * Looking for test storage... 00:32:00.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:00.417 22:58:45 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.417 22:58:45 -- nvmf/common.sh@7 -- # uname -s 00:32:00.417 22:58:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.417 22:58:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.417 22:58:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.417 22:58:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.417 22:58:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.417 22:58:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.417 22:58:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.417 22:58:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.417 22:58:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.417 22:58:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.417 22:58:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:00.417 22:58:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:00.417 22:58:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.417 22:58:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.417 22:58:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.417 22:58:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.417 22:58:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.417 22:58:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.417 22:58:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.417 
22:58:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.417 22:58:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.417 22:58:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.417 22:58:45 -- paths/export.sh@5 -- # export PATH 00:32:00.417 22:58:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.417 22:58:45 -- nvmf/common.sh@46 -- # : 0 00:32:00.417 22:58:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:00.417 22:58:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:00.417 22:58:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:00.417 22:58:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.417 22:58:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.417 22:58:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:00.417 22:58:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:00.417 22:58:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:00.417 22:58:45 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:00.417 22:58:45 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:00.417 22:58:45 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:00.417 22:58:45 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:32:00.417 22:58:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:00.417 22:58:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.417 22:58:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:00.417 
22:58:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:00.417 22:58:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:00.417 22:58:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.417 22:58:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.417 22:58:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.417 22:58:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:00.417 22:58:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:00.417 22:58:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:00.417 22:58:45 -- common/autotest_common.sh@10 -- # set +x 00:32:08.609 22:58:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:08.609 22:58:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:08.609 22:58:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:08.609 22:58:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:08.609 22:58:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:08.609 22:58:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:08.609 22:58:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:08.609 22:58:52 -- nvmf/common.sh@294 -- # net_devs=() 00:32:08.609 22:58:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:08.609 22:58:52 -- nvmf/common.sh@295 -- # e810=() 00:32:08.609 22:58:52 -- nvmf/common.sh@295 -- # local -ga e810 00:32:08.609 22:58:52 -- nvmf/common.sh@296 -- # x722=() 00:32:08.609 22:58:52 -- nvmf/common.sh@296 -- # local -ga x722 00:32:08.609 22:58:52 -- nvmf/common.sh@297 -- # mlx=() 00:32:08.609 22:58:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:08.609 22:58:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.609 22:58:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:08.609 22:58:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:08.609 22:58:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:08.609 22:58:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:08.609 22:58:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:08.609 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:08.609 22:58:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:08.609 22:58:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:08.609 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:08.609 22:58:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:08.609 22:58:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:08.609 22:58:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.609 22:58:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:08.609 22:58:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.609 22:58:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:08.609 Found net devices under 0000:31:00.0: cvl_0_0 00:32:08.609 22:58:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.609 22:58:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:08.609 22:58:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.609 22:58:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:08.609 22:58:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.609 22:58:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:08.609 Found net devices under 0000:31:00.1: cvl_0_1 00:32:08.609 22:58:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.609 22:58:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:08.609 22:58:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:08.609 22:58:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:08.609 22:58:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:08.609 22:58:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.609 22:58:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.609 22:58:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.609 22:58:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:08.609 22:58:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.609 22:58:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.609 22:58:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:08.609 22:58:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.609 22:58:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.609 22:58:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:08.609 22:58:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:08.609 22:58:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.609 22:58:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.609 22:58:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.609 22:58:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:32:08.609 22:58:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:08.609 22:58:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.609 22:58:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.609 22:58:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.609 22:58:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:08.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:32:08.609 00:32:08.609 --- 10.0.0.2 ping statistics --- 00:32:08.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.609 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:32:08.609 22:58:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:08.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:32:08.609 00:32:08.609 --- 10.0.0.1 ping statistics --- 00:32:08.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.609 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:32:08.609 22:58:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.609 22:58:53 -- nvmf/common.sh@410 -- # return 0 00:32:08.609 22:58:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:08.609 22:58:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.609 22:58:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:08.609 22:58:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:08.609 22:58:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.609 22:58:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:08.609 22:58:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:08.609 22:58:53 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:08.609 22:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:08.609 22:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:08.609 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:32:08.609 ************************************ 00:32:08.609 START TEST nvmf_target_disconnect_tc1 00:32:08.609 ************************************ 00:32:08.609 22:58:53 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:32:08.609 22:58:53 -- host/target_disconnect.sh@32 -- # set +e 00:32:08.609 22:58:53 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:08.609 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.609 [2024-04-15 22:58:53.298920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.609 [2024-04-15 22:58:53.299387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.609 [2024-04-15 22:58:53.299403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1678310 with addr=10.0.0.2, port=4420 00:32:08.609 [2024-04-15 22:58:53.299432] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:08.609 [2024-04-15 22:58:53.299443] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:08.609 [2024-04-15 22:58:53.299452] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context 
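Condensed, the network bring-up that nvmf/common.sh just performed is: move one E810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target interface at 10.0.0.2/24, keep its peer (cvl_0_1) in the root namespace as the initiator at 10.0.0.1/24, open TCP/4420 in iptables, and ping in both directions before loading nvme-tcp. A minimal sketch of that topology, with every name and address taken from the commands logged above:

    # target port lives in its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The probe error printed immediately above belongs to tc1, which is summarized after its END marker below.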
failed 00:32:08.609 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:08.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:08.609 Initializing NVMe Controllers 00:32:08.609 22:58:53 -- host/target_disconnect.sh@33 -- # trap - ERR 00:32:08.609 22:58:53 -- host/target_disconnect.sh@33 -- # print_backtrace 00:32:08.609 22:58:53 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:32:08.609 22:58:53 -- common/autotest_common.sh@1132 -- # return 0 00:32:08.609 22:58:53 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:32:08.609 22:58:53 -- host/target_disconnect.sh@41 -- # set -e 00:32:08.609 00:32:08.609 real 0m0.112s 00:32:08.609 user 0m0.038s 00:32:08.609 sys 0m0.072s 00:32:08.609 22:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:08.609 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:32:08.610 ************************************ 00:32:08.610 END TEST nvmf_target_disconnect_tc1 00:32:08.610 ************************************ 00:32:08.610 22:58:53 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:08.610 22:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:08.610 22:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:08.610 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:32:08.610 ************************************ 00:32:08.610 START TEST nvmf_target_disconnect_tc2 00:32:08.610 ************************************ 00:32:08.610 22:58:53 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:32:08.610 22:58:53 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:32:08.610 22:58:53 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:08.610 22:58:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:08.610 22:58:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:08.610 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:32:08.610 22:58:53 -- nvmf/common.sh@469 -- # nvmfpid=1337585 00:32:08.610 22:58:53 -- nvmf/common.sh@470 -- # waitforlisten 1337585 00:32:08.610 22:58:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:08.610 22:58:53 -- common/autotest_common.sh@819 -- # '[' -z 1337585 ']' 00:32:08.610 22:58:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.610 22:58:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:08.610 22:58:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.610 22:58:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:08.610 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:32:08.610 [2024-04-15 22:58:53.416538] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
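nvmf_target_disconnect_tc1, which just ended, is the negative case: with set +e in effect, the reconnect example is pointed at 10.0.0.2:4420 before any nvmf target exists in the namespace, spdk_nvme_probe() is expected to fail (the errno 111 and "Create probe context failed" lines above), and the harness only verifies the example's non-zero exit status (the '[' 1 '!=' 1 ']' trace is that check passing). A sketch of the same pattern with the flags used in this run:

    set +e
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    rc=$?
    set -e
    [ "$rc" -ne 0 ] && echo "probe failed as expected: nothing is listening yet"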
00:32:08.610 [2024-04-15 22:58:53.416607] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.870 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.870 [2024-04-15 22:58:53.509685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:08.870 [2024-04-15 22:58:53.602697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:08.870 [2024-04-15 22:58:53.602854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.870 [2024-04-15 22:58:53.602864] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.870 [2024-04-15 22:58:53.602872] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.870 [2024-04-15 22:58:53.603413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:08.870 [2024-04-15 22:58:53.603565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:08.870 [2024-04-15 22:58:53.603714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:08.870 [2024-04-15 22:58:53.603835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:09.442 22:58:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:09.442 22:58:54 -- common/autotest_common.sh@852 -- # return 0 00:32:09.442 22:58:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:09.442 22:58:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:09.442 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.704 22:58:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.704 22:58:54 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:09.704 22:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.704 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.704 Malloc0 00:32:09.704 22:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.704 22:58:54 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:09.704 22:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.704 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.704 [2024-04-15 22:58:54.284069] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.704 22:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.704 22:58:54 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:09.704 22:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.704 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.704 22:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.704 22:58:54 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.704 22:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.704 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.704 22:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.704 22:58:54 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.704 22:58:54 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:32:09.704 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.704 [2024-04-15 22:58:54.324439] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.704 22:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.704 22:58:54 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:09.704 22:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.704 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.704 22:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.704 22:58:54 -- host/target_disconnect.sh@50 -- # reconnectpid=1337924 00:32:09.704 22:58:54 -- host/target_disconnect.sh@52 -- # sleep 2 00:32:09.704 22:58:54 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:09.704 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.625 22:58:56 -- host/target_disconnect.sh@53 -- # kill -9 1337585 00:32:11.625 22:58:56 -- host/target_disconnect.sh@55 -- # sleep 2 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 
00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 
Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 [2024-04-15 22:58:56.364710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 [2024-04-15 22:58:56.364992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 [2024-04-15 22:58:56.365249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with 
error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Read completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.626 starting I/O failed 00:32:11.626 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Read completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Read completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Read completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Read completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Read completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Read completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 Write completed with error (sct=0, sc=8) 00:32:11.627 starting I/O failed 00:32:11.627 [2024-04-15 22:58:56.365598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:11.627 [2024-04-15 22:58:56.366036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.366451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.366464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa650000b90 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.367037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.367365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.367377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 
00:32:11.627 [2024-04-15 22:58:56.367842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.368221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.368235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.368495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.368792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.368805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.369159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.369518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.369528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.369851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.370027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.370039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.370247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.370589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.370600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.370831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.371159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.371170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.371362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.371618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.371628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 
00:32:11.627 [2024-04-15 22:58:56.372011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.372405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.372416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.372599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.372981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.372992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.373333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.373562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.373572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.373900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.374231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.374241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.374432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.374629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.374641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.374934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.375248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.375259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.375634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.375918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.375928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 
00:32:11.627 [2024-04-15 22:58:56.376290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.376572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.376586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.376867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.377210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.377221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.377599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.377979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.377990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.378354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.378598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.378609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.378952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.379300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.379310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.379637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.379794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.379804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.380079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.380416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.380426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 
00:32:11.627 [2024-04-15 22:58:56.380842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.381171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.381182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.381551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.381891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.627 [2024-04-15 22:58:56.381902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.627 qpair failed and we were unable to recover it. 00:32:11.627 [2024-04-15 22:58:56.382204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.382554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.382564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.382929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.383157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.383167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.383537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.383870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.383880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.384192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.384567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.384578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.384964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.385305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.385315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 
00:32:11.628 [2024-04-15 22:58:56.385662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.386000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.386011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.386365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.386585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.386595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.386921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.387286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.387297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.387564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.387896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.387908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.388281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.388634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.388644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.389036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.389386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.389396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.389763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.390106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.390116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 
00:32:11.628 [2024-04-15 22:58:56.390440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.390800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.390811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.391187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.391456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.391466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.391689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.392054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.392065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.392443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.392801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.392811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.393188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.393583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.393594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.393853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.394234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.394244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.394616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.394962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.394972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 
00:32:11.628 [2024-04-15 22:58:56.395352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.395711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.395722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.396076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.396382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.396392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.396748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.397105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.397115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.397485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.397711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.397721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.398019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.398363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.398373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.398577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.398923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.398933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.399195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.399521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.399531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 
00:32:11.628 [2024-04-15 22:58:56.399881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.400275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.400285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.400631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.401001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.401011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.401225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.401617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.401627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.628 qpair failed and we were unable to recover it. 00:32:11.628 [2024-04-15 22:58:56.402011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.628 [2024-04-15 22:58:56.402239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.402249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.402592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.402928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.402939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.403140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.403512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.403521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.403875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.404233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.404247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 
00:32:11.629 [2024-04-15 22:58:56.404619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.404970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.404980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.405353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.405688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.405699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.406003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.406325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.406335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.406511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.406857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.406868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.407242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.407568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.407578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.407944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.408289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.408300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.408672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.408884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.408895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 
00:32:11.629 [2024-04-15 22:58:56.409134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.409522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.409533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.409891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.410271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.410281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.410628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.411008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.411021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.411370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.411680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.411691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.412052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.412401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.412411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.412783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.413124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.413135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.413479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.413816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.413827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 
00:32:11.629 [2024-04-15 22:58:56.414168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.414554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.414565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.414943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.415326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.415336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.415699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.416081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.416091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.416433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.416787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.416798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.417170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.417559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.417570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.417928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.418265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.418275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.418613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.419001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.419011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 
00:32:11.629 [2024-04-15 22:58:56.419382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.419730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.419741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.420111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.420462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.420473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.420814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.421175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.421186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.629 [2024-04-15 22:58:56.421560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.421877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.629 [2024-04-15 22:58:56.421888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.629 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.422107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.422449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.422459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.422813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.423154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.423165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.423537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.423992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.424002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 
00:32:11.630 [2024-04-15 22:58:56.424343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.424578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.424588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.424948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.425330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.425340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.425680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.426034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.426044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.426422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.426747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.426758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.427113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.427493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.427503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.427873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.428254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.428264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.428638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.429004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.429015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 
00:32:11.630 [2024-04-15 22:58:56.429365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.429750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.429761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.430129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.430511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.630 [2024-04-15 22:58:56.430521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.630 qpair failed and we were unable to recover it. 00:32:11.630 [2024-04-15 22:58:56.430929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.431270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.431281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.431670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.432021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.432033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.432375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.432741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.432752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.433136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.433519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.433529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.433875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.434260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.434270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 
00:32:11.900 [2024-04-15 22:58:56.434641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.434969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.434979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.435353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.435652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.435663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.436033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.436414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.436425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.436803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.437221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.437231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.437534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.437873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.437883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.438236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.438575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.438585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.438970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.439357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.439367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 
00:32:11.900 [2024-04-15 22:58:56.439588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.439968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.439978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.440328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.440712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.440725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.440925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.441294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.441305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.441681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.442022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.442032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.442389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.442744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.442755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.443090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.443421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.443432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.443806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.444029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.444039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 
00:32:11.900 [2024-04-15 22:58:56.444464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.444734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.444745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.445093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.445309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.445320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.445680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.446040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.446050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.446403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.446674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.446685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.447044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.447425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.447436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.447781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.448129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.448140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.448492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.448868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.448879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 
00:32:11.900 [2024-04-15 22:58:56.449249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.449634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.449645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.450020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.450405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.900 [2024-04-15 22:58:56.450416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.900 qpair failed and we were unable to recover it. 00:32:11.900 [2024-04-15 22:58:56.450762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.451108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.451119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.451497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.451863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.451875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.452284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.452623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.452633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.452972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.453245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.453256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.453635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.454011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.454021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 
00:32:11.901 [2024-04-15 22:58:56.454393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.454733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.454743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.455056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.455426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.455436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.455801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.456186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.456196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.456571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.456915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.456925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.457278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.457619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.457630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.458011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.458392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.458403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.458759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.459103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.459113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 
00:32:11.901 [2024-04-15 22:58:56.459419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.459754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.459765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.460138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.460523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.460534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.460880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.461266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.461276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.461627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.461965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.461975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.462336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.462723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.462734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.463062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.463451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.463462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.463827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.464187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.464197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 
00:32:11.901 [2024-04-15 22:58:56.464414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.464759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.464769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.465142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.465480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.465490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.465850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.466204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.466215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.466588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.466945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.466955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.467326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.467687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.467698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.468003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.468385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.468396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.468773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.469112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.469122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 
00:32:11.901 [2024-04-15 22:58:56.469382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.469766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.469777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.470086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.470446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.470457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.470800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.471179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.471189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.901 qpair failed and we were unable to recover it. 00:32:11.901 [2024-04-15 22:58:56.471563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.901 [2024-04-15 22:58:56.471869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.471879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.472236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.472568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.472579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.472952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.473334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.473345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.473719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.474056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.474067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 
00:32:11.902 [2024-04-15 22:58:56.474418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.474793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.474804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.475177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.475560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.475571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.475924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.476256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.476266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.476622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.476996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.477009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.477384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.477638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.477648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.478004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.478211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.478222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.478579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.478899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.478909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 
00:32:11.902 [2024-04-15 22:58:56.479278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.479659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.479670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.479876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.480243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.480253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.480606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.480995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.481006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.481376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.481720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.481731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.481953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.482310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.482319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.482671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.483026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.483037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.483313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.483699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.483709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 
00:32:11.902 [2024-04-15 22:58:56.484094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.484347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.484358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.484711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.485066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.485076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.485450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.485828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.485838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.486223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.486604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.486614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.486963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.487346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.487356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.487723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.488111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.488122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.488541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.488888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.488899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 
00:32:11.902 [2024-04-15 22:58:56.489240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.489449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.489460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.489793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.490176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.490187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.490559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.490914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.490924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.491352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.491660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.491670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.902 qpair failed and we were unable to recover it. 00:32:11.902 [2024-04-15 22:58:56.492044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.902 [2024-04-15 22:58:56.492430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.492441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.492797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.493179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.493190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.493549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.493910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.493920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 
00:32:11.903 [2024-04-15 22:58:56.494299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.494636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.494647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.494985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.495366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.495376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.495827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.496172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.496182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.496554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.496795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.496806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.497180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.497555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.497566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.497937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.498316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.498326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.498699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.499049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.499060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 
00:32:11.903 [2024-04-15 22:58:56.499464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.499779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.499789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.500030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.500411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.500421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.500792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.501179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.501189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.501561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.501904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.501914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.502264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.502609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.502619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.502993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.503335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.503345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.503697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.504030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.504040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 
00:32:11.903 [2024-04-15 22:58:56.504423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.504802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.504813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.505193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.505574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.505584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.505931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.506314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.506326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.506678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.507033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.507044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.507390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.507732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.507742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.508115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.508496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.508506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.508834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.509128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.509139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 
00:32:11.903 [2024-04-15 22:58:56.509505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.509882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.509893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.510263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.510647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.510658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.511003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.511386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.511396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.511743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.512018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.512028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.512352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.512691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.512701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.903 qpair failed and we were unable to recover it. 00:32:11.903 [2024-04-15 22:58:56.513049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.903 [2024-04-15 22:58:56.513430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.513442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.513808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.514029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.514039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 
00:32:11.904 [2024-04-15 22:58:56.514413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.514790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.514800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.515152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.515539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.515553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.515896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.516242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.516252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.516625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.517011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.517021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.517226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.517541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.517554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.517787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.518174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.518184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.518615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.518983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.518993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 
00:32:11.904 [2024-04-15 22:58:56.519299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.519664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.519675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.520027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.520408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.520418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.520795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.521132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.521142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.521494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.521839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.521850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.522199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.522536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.522552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.522876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.523259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.523270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.523650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.524018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.524028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 
00:32:11.904 [2024-04-15 22:58:56.524376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.524687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.524698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.525069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.525279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.525290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.525645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.526009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.526019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.526384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.526757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.526768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.527122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.527498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.527508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.527880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.528267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.528278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 00:32:11.904 [2024-04-15 22:58:56.528633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.528998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.529010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.904 qpair failed and we were unable to recover it. 
00:32:11.904 [2024-04-15 22:58:56.529381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.529725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.904 [2024-04-15 22:58:56.529735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.530035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.530394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.530404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.530774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.531160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.531170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.531475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.531834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.531845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.532215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.532600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.532611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.532919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.533291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.533302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.533678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.534020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.534030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 
00:32:11.905 [2024-04-15 22:58:56.534382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.534674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.534685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.534915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.535253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.535264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.535617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.536002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.536013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.536386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.536764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.536774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.537129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.537468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.537479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.537823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.538185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.538195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.538545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.538913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.538924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 
00:32:11.905 [2024-04-15 22:58:56.539293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.539585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.539596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.539937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.540321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.540332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.540704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.541087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.541097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.541445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.541800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.541811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.542182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.542520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.542532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.542942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.543299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.543310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.543652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.544031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.544042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 
00:32:11.905 [2024-04-15 22:58:56.544390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.544701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.544711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.545080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.545470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.545481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.545832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.546169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.546179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.546554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.546938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.546948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.547255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.547622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.547634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.548007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.548365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.548376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.548725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.549062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.549072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 
00:32:11.905 [2024-04-15 22:58:56.549444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.549804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.549814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.905 [2024-04-15 22:58:56.550163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.550506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.905 [2024-04-15 22:58:56.550516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.905 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.550814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.551184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.551194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.551547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.551849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.551860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.552236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.552622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.552633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.552989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.553310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.553321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.553655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.554038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.554048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 
00:32:11.906 [2024-04-15 22:58:56.554387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.554761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.554772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.555105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.555487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.555497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.555840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.556227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.556237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.556614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.556959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.556969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.557320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.557485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.557496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.557854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.558148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.558158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.558497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.558875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.558886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 
00:32:11.906 [2024-04-15 22:58:56.559256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.559597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.559608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.559959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.560299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.560310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.560679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.561030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.561040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.561387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.561655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.561665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.561996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.562224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.562235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.562614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.563000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.563010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.563379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.563607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.563618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 
00:32:11.906 [2024-04-15 22:58:56.563967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.564346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.564356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.564733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.565116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.565127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.565462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.565809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.565820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.566190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.566444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.566455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.566808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.567184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.567195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.567573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.567920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.567931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.568327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.568706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.568716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 
00:32:11.906 [2024-04-15 22:58:56.569051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.569391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.569403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.569752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.570146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.570156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.906 [2024-04-15 22:58:56.570356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.570687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.906 [2024-04-15 22:58:56.570698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.906 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.571047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.571394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.571405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.571749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.572111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.572121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.572472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.572816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.572827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.573028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.573391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.573401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 
00:32:11.907 [2024-04-15 22:58:56.573762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.574155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.574165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.574571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.574875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.574885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.575233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.575616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.575626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.576000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.576338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.576349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.576786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.577096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.577107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.577450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.577799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.577810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.578009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.578339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.578354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 
00:32:11.907 [2024-04-15 22:58:56.578693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.579051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.579062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.579433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.579749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.579760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.580016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.580395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.580406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.580757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.581146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.581156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.581528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.581909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.581920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.582274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.582660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.582670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.583057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.583439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.583449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 
00:32:11.907 [2024-04-15 22:58:56.583808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.584133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.584144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.584477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.584828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.584839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.585190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.585572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.585583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.585940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.586191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.586202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.586551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.586908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.586918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.587294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.587634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.587645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.588002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.588351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.588361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 
00:32:11.907 [2024-04-15 22:58:56.588691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.589041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.589052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.589399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.589667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.589678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.590027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.590368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.590378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.907 [2024-04-15 22:58:56.590730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.591062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.907 [2024-04-15 22:58:56.591072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.907 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.591404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.591743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.591753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.592105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.592483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.592494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.592834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.593175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.593185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 
00:32:11.908 [2024-04-15 22:58:56.593538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.593914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.593924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.594276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.594615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.594626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.594982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.595366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.595377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.595731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.596078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.596088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.596442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.596669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.596680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.597037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.597303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.597314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.597719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.597986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.597996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 
00:32:11.908 [2024-04-15 22:58:56.598357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.598745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.598756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.599108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.599376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.599387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.599764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.600147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.600158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.600499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.600877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.600887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.601261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.601475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.601486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.601847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.602186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.602196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.602529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.602906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.602917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 
00:32:11.908 [2024-04-15 22:58:56.603233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.603627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.603638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.603986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.604369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.604379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.604734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.605065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.605076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.605436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.605792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.605803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.606117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.606501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.606511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.606886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.607226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.607239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.607594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.607939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.607949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 
00:32:11.908 [2024-04-15 22:58:56.608324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.608714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.608725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.609081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.609464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.609475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.609856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.610194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.610205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.610555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.610916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.610926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.908 [2024-04-15 22:58:56.611299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.611653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.908 [2024-04-15 22:58:56.611664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.908 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.612031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.612412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.612422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.612799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.613180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.613191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 
00:32:11.909 [2024-04-15 22:58:56.613539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.613905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.613916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.614287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.614622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.614635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.615001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.615386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.615396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.615598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.615976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.615986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.616339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.616725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.616736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.617107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.617447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.617458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.617831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.618220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.618230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 
00:32:11.909 [2024-04-15 22:58:56.618609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.618952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.618962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.619331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.619713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.619723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.620098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.620481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.620491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.620892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.621276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.621286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.621614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.621983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.621994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.622298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.622661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.622671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.623023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.623371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.623382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 
00:32:11.909 [2024-04-15 22:58:56.623736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.624118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.624128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.624439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.624751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.624761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.625105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.625441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.625451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.625803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.626186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.626196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.626509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.626704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.626715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.627071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.627427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.627438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.627805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.628158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.628169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 
00:32:11.909 [2024-04-15 22:58:56.628580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.628882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.628893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.909 [2024-04-15 22:58:56.629259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.629511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.909 [2024-04-15 22:58:56.629521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.909 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.629892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.630269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.630279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.630589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.630951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.630962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.631334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.631659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.631671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.632011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.632398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.632409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.632784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.633168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.633179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 
00:32:11.910 [2024-04-15 22:58:56.633530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.633906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.633916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.634289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.634745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.634784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.635147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.635536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.635553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.635897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.636235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.636246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.636608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.636940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.636950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.637305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.637608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.637619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.637967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.638307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.638317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 
00:32:11.910 [2024-04-15 22:58:56.638722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.639055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.639065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.639423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.639787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.639798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.640176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.640559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.640569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.640881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.641224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.641234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.641607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.641993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.642003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.642355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.642698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.642709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.643039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.643424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.643434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 
00:32:11.910 [2024-04-15 22:58:56.643810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.644066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.644078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.644448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.644792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.644803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.645058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.645393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.645404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.645712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.646072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.646082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.646437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.646705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.646717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.647023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.647388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.647398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.647755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.648142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.648152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 
00:32:11.910 [2024-04-15 22:58:56.648490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.648829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.648840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.649191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.649467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.649477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.910 [2024-04-15 22:58:56.649833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.650218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.910 [2024-04-15 22:58:56.650228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.910 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.650578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.650898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.650908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.651283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.651674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.651685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.652035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.652375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.652386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.652811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.653148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.653158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 
00:32:11.911 [2024-04-15 22:58:56.653590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.653895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.653906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.654283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.654501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.654515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.654884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.655269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.655279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.655621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.656005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.656015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.656365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.656747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.656758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.657134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.657516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.657527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.657875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.658112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.658124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 
00:32:11.911 [2024-04-15 22:58:56.658501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.658880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.658891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.659093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.659412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.659422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.659768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.660001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.660011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.660365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.660753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.660764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.661005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.661391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.661400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.661752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.662140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.662150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.662526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.662906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.662917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 
00:32:11.911 [2024-04-15 22:58:56.663271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.663651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.663662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.663995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.664380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.664390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.664748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.665141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.665151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.665497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.665751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.665761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.666070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.666381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.666392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.666768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.667132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.667142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.667576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.667896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.667906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 
00:32:11.911 [2024-04-15 22:58:56.668279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.668623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.668634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.668957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.669348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.669358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.669740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.670133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.670143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.670494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.670833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.911 [2024-04-15 22:58:56.670843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.911 qpair failed and we were unable to recover it. 00:32:11.911 [2024-04-15 22:58:56.671218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.671599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.671609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.671961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.672342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.672353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.672764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.673063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.673073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 
00:32:11.912 [2024-04-15 22:58:56.673431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.673788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.673799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.674171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.674554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.674565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.674928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.675267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.675278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.675654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.676008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.676019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.676394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.676734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.676745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.677095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.677478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.677489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.677914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.678254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.678264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 
00:32:11.912 [2024-04-15 22:58:56.678648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.679016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.679026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.679381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.679728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.679739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.680111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.680454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.680466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.680819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.681201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.681211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.681586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.681910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.681920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.682275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.682633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.682643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.682973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.683337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.683347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 
00:32:11.912 [2024-04-15 22:58:56.683736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.684122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.684132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.684540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.684870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.684880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.685186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.685555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.685566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.685772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.686050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.686061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.686427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.686791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.686802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.687124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.687467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.687477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.687837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.688220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.688230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 
00:32:11.912 [2024-04-15 22:58:56.688601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.688941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.688951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.689303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.689685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.689696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.690070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.690456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.690465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.690818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.691044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.691055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.691405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.691709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.691719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.912 qpair failed and we were unable to recover it. 00:32:11.912 [2024-04-15 22:58:56.692065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.912 [2024-04-15 22:58:56.692404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.692414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.692780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.693124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.693134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 
00:32:11.913 [2024-04-15 22:58:56.693470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.693826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.693837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.694210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.694579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.694590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.694896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.695280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.695290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.695650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.695971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.695981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.696228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.696570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.696580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.696938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.697322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.697332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.697686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.698038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.698048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 
00:32:11.913 [2024-04-15 22:58:56.698411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.698792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.698802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.699155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.699493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.699504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:11.913 [2024-04-15 22:58:56.699856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.700194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.913 [2024-04-15 22:58:56.700204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:11.913 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.700536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.700906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.700917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.701235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.701626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.701637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.702000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.702348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.702359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.702730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.703088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.703098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 
00:32:12.186 [2024-04-15 22:58:56.703443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.703807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.703819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.704192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.704576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.704586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.704939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.705297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.705307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.705525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.705796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.705806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.706235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.706530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.706540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.706887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.707235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.707245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 00:32:12.186 [2024-04-15 22:58:56.707604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.707955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.707966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.186 qpair failed and we were unable to recover it. 
00:32:12.186 [2024-04-15 22:58:56.708336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.186 [2024-04-15 22:58:56.708674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.708685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.709036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.709358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.709370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.709704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.710075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.710086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.710435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.710795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.710805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.711190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.711576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.711586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.711938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.712323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.712333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.712713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.713103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.713113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 
00:32:12.187 [2024-04-15 22:58:56.713536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.713862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.713872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.714246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.714588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.714599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.714975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.715199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.715209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.715509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.715869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.715880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.716230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.716613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.716626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.716995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.717375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.717386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.717691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.718054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.718065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 
00:32:12.187 [2024-04-15 22:58:56.718438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.718789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.718800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.719159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.719547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.719558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.719905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.720172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.720183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.720528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.720905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.720916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.721292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.721633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.721644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.722069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.722429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.722439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.722810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.723192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.723202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 
00:32:12.187 [2024-04-15 22:58:56.723556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.723906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.723917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.724302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.724632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.724642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.724982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.725324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.725334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.725715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.726099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.726109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.726460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.726703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.726713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.727024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.727395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.727405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.187 [2024-04-15 22:58:56.727758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.728104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.728114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 
00:32:12.187 [2024-04-15 22:58:56.728487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.728794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.187 [2024-04-15 22:58:56.728805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.187 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.729009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.729365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.729375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.729622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.730006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.730016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.730372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.730716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.730726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.731101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.731445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.731456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.731825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.732209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.732220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.732598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.732936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.732947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 
00:32:12.188 [2024-04-15 22:58:56.733298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.733677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.733688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.734040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.734423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.734433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.734807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.735189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.735200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.735572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.735915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.735925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.736315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.736612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.736622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.736978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.737317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.737327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.737694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.738057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.738067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 
00:32:12.188 [2024-04-15 22:58:56.738443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.738745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.738756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.739101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.739365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.739376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.739614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.740007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.740017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.740259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.740646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.740656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.740978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.741316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.741327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.741677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.742030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.742041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.742353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.742721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.742731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 
00:32:12.188 [2024-04-15 22:58:56.743075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.743454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.743464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.743820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.744204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.744214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.744564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.744884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.744894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.745264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.745647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.745660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.746034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.746262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.746273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.746653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.747039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.747049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.747401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.747784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.747794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 
00:32:12.188 [2024-04-15 22:58:56.748168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.748552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.748562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.748929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.749270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.188 [2024-04-15 22:58:56.749281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.188 qpair failed and we were unable to recover it. 00:32:12.188 [2024-04-15 22:58:56.749654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.749997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.750008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.750360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.750741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.750751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.751062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.751433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.751443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.751819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.752205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.752215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.752555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.752874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.752884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 
00:32:12.189 [2024-04-15 22:58:56.753236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.753505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.753515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.753877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.754264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.754274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.754632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.755011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.755021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.755397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.755777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.755787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.756004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.756348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.756359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.756738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.757061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.757071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.757396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.757773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.757783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 
00:32:12.189 [2024-04-15 22:58:56.758164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.758549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.758559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.758876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.759218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.759228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.759597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.759940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.759949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.760329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.760601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.760612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.761009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.761349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.761359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.761706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.762047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.762058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.762435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.762789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.762800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 
00:32:12.189 [2024-04-15 22:58:56.763205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.763589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.763600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.763975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.764359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.764369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.764717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.765061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.765073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.765449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.765786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.765797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.766151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.766488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.766498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.766853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.767192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.767202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.767548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.767904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.767914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 
00:32:12.189 [2024-04-15 22:58:56.768285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.768652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.768662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.189 qpair failed and we were unable to recover it. 00:32:12.189 [2024-04-15 22:58:56.769026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.189 [2024-04-15 22:58:56.769414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.769425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.769724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.770091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.770101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.770442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.770764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.770775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.771150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.771511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.771521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.771861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.772245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.772255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.772625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.773008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.773019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 
00:32:12.190 [2024-04-15 22:58:56.773376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.773726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.773736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.774120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.774461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.774471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.774823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.775208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.775219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.775590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.775830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.775842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.776198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.776583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.776594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.776975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.777317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.777326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.777694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.778077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.778087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 
00:32:12.190 [2024-04-15 22:58:56.778465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.778812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.778823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.779179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.779539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.779555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.779879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.780217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.780227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.780432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.780745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.780755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.781135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.781519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.781531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.781886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.782108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.782121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.782504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.782887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.782897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 
00:32:12.190 [2024-04-15 22:58:56.783250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.783639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.783649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.783994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.784289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.784299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.784656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.785031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.785042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.785417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.785765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.785776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.786136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.786520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.786530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.786898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.787239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.787249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.787603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.787989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.787999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 
00:32:12.190 [2024-04-15 22:58:56.788217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.788562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.788573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.788891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.789245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.789256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.190 [2024-04-15 22:58:56.789674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.790046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.190 [2024-04-15 22:58:56.790056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.190 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.790406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.790750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.790760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.791096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.791483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.791493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.791863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.792166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.792177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.792511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.792896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.792907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 
00:32:12.191 [2024-04-15 22:58:56.793171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.793492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.793502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.793868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.794246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.794257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.794571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.794922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.794932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.795308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.795691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.795701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.796047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.796401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.796411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.796792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.797106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.797116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.797476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.797835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.797847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 
00:32:12.191 [2024-04-15 22:58:56.798221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.798604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.798615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.798992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.799374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.799384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.799757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.800140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.800151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.800502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.800886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.800896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.801271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.801631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.801642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.802015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.802400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.802411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.802787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.803129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.803139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 
00:32:12.191 [2024-04-15 22:58:56.803501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.803860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.803870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.804249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.804518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.804529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.804851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.805218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.805228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.805605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.805942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.805952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.806297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.806676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.806686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.191 [2024-04-15 22:58:56.807057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.807399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.191 [2024-04-15 22:58:56.807409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.191 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.807759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.808103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.808114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 
00:32:12.192 [2024-04-15 22:58:56.808422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.808802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.808813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.809177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.809511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.809522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.809876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.810248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.810259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.810790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.811163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.811177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.811442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.811789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.811804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.812165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.812555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.812566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.812780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.813143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.813153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 
00:32:12.192 [2024-04-15 22:58:56.813508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.813780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.813791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.814161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.814559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.814571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.814944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.815288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.815298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.815674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.816025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.816036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.816423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.816802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.816813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.817186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.817570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.817581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.817935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.818317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.818328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 
00:32:12.192 [2024-04-15 22:58:56.818701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.819046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.819060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.819410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.819645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.819657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.819978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.820345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.820356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.820707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.820901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.820912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.821259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.821633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.821643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.821994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.822382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.822392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.822730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.823093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.823103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 
00:32:12.192 [2024-04-15 22:58:56.823541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.823885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.823896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.824277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.824615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.824626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.824932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.825300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.825310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.825686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.825907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.825917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.826151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.826495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.826506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.826878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.827260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.827271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 00:32:12.192 [2024-04-15 22:58:56.827633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.827979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.192 [2024-04-15 22:58:56.827990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.192 qpair failed and we were unable to recover it. 
00:32:12.193 [2024-04-15 22:58:56.828375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.828721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.828731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.829085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.829469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.829479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.829835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.830057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.830067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.830502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.830808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.830819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.831202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.831509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.831520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.831872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.832254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.832264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.832637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.833006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.833017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 
00:32:12.193 [2024-04-15 22:58:56.833373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.833732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.833743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.834117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.834352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.834362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.834712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.835068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.835078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.835451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.835841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.835852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.836194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.836574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.836586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.836967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.837290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.837301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.837683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.838044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.838055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 
00:32:12.193 [2024-04-15 22:58:56.838273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.838615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.838626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.838935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.839297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.839308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.839687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.839941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.839951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.840305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.840588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.840599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.840930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.841312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.841322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.841697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.842069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.842079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.842392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.842730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.842741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 
00:32:12.193 [2024-04-15 22:58:56.843091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.843432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.843442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.843798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.844118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.844129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.844431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.844704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.844714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.845090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.845429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.845440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.845820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.846206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.846217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.193 [2024-04-15 22:58:56.846623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.846927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.193 [2024-04-15 22:58:56.846937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.193 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.847283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.847606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.847619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 
00:32:12.194 [2024-04-15 22:58:56.847973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.848200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.848212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.848564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.849003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.849013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.849386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.849747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.849758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.850108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.850444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.850454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.850808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.850967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.850978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.851289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.851674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.851685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.852059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.852399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.852410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 
00:32:12.194 [2024-04-15 22:58:56.852614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.853000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.853011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.853382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.853732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.853742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.854092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.854414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.854424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.854743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.855107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.855117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.855466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.855806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.855816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.856185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.856566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.856578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.856993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.857380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.857391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 
00:32:12.194 [2024-04-15 22:58:56.857745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.857956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.857967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.858321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.858717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.858727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.859105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.859447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.859457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.859802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.860149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.860160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.860531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.860785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.860796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.861146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.861528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.861539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.861882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.862114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.862124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 
00:32:12.194 [2024-04-15 22:58:56.862475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.862798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.862808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.863189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.863576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.863587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.863944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.864326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.864337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.864706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.865049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.865059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.865416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.865743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.865753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.866130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.866501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.866511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 00:32:12.194 [2024-04-15 22:58:56.866898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.867278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.194 [2024-04-15 22:58:56.867289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.194 qpair failed and we were unable to recover it. 
00:32:12.194 [2024-04-15 22:58:56.867666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.868009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.868020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.195 qpair failed and we were unable to recover it. 00:32:12.195 [2024-04-15 22:58:56.868376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.868754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.868764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.195 qpair failed and we were unable to recover it. 00:32:12.195 [2024-04-15 22:58:56.869088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.869428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.869438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.195 qpair failed and we were unable to recover it. 00:32:12.195 [2024-04-15 22:58:56.869864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.870201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.870211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.195 qpair failed and we were unable to recover it. 00:32:12.195 [2024-04-15 22:58:56.870585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.870930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.870941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.195 qpair failed and we were unable to recover it. 00:32:12.195 [2024-04-15 22:58:56.871241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.871605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.871616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.195 qpair failed and we were unable to recover it. 00:32:12.195 [2024-04-15 22:58:56.871990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.872330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.195 [2024-04-15 22:58:56.872341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.195 qpair failed and we were unable to recover it. 
00:32:12.200 [2024-04-15 22:58:56.972122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.972507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.972517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.972887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.973274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.973284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.973596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.973982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.973992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.974368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.974755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.974766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.975118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.975478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.975489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.975868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.976183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.976194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.976553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.976921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.976931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 
00:32:12.200 [2024-04-15 22:58:56.977307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.977685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.977695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.978049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.978436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.978446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.978804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.979032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.979043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.979429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.979788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.979798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.980182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.980448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.980458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.980829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.981210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.981220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 00:32:12.200 [2024-04-15 22:58:56.981590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.981945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.200 [2024-04-15 22:58:56.981956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.200 qpair failed and we were unable to recover it. 
00:32:12.200 [2024-04-15 22:58:56.982311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.982698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.982709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 00:32:12.491 [2024-04-15 22:58:56.983059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.983260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.983269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 00:32:12.491 [2024-04-15 22:58:56.983636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.983875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.983886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 00:32:12.491 [2024-04-15 22:58:56.984242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.984626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.984637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 00:32:12.491 [2024-04-15 22:58:56.984935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.985311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.985323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 00:32:12.491 [2024-04-15 22:58:56.985694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.985866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.985878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 00:32:12.491 [2024-04-15 22:58:56.986216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.986557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.986567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 
00:32:12.491 [2024-04-15 22:58:56.986942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.987315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.987325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 00:32:12.491 [2024-04-15 22:58:56.987678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.988001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.491 [2024-04-15 22:58:56.988012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.491 qpair failed and we were unable to recover it. 00:32:12.491 [2024-04-15 22:58:56.988390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.988777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.988788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.989139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.989507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.989517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.989872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.990252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.990263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.990617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.990956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.990967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.991292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.991631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.991641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 
00:32:12.492 [2024-04-15 22:58:56.991934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.992298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.992308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.992510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.992839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.992849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.993202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.993561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.993572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.993932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.994316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.994326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.994674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.995060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.995070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.995433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.995791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.995802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.996159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.996470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.996481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 
00:32:12.492 [2024-04-15 22:58:56.996842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.997224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.997235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.997567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.997840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.997850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.998222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.998602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.998613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.998836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.999201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.999211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:56.999591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.999934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:56.999945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.000252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.000597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.000607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.000988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.001372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.001382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 
00:32:12.492 [2024-04-15 22:58:57.001734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.002053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.002063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.002419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.002680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.002690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.003041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.003379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.003390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.003762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.004127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.004137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.004491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.004868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.004879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.005253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.005641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.005651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.006078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.006416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.006426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 
00:32:12.492 [2024-04-15 22:58:57.006805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.007148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.007159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.007467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.007870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.492 [2024-04-15 22:58:57.007881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.492 qpair failed and we were unable to recover it. 00:32:12.492 [2024-04-15 22:58:57.008253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.008634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.008644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.009013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.009398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.009409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.009729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.010097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.010108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.010458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.010807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.010817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.011189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.011468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.011479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 
00:32:12.493 [2024-04-15 22:58:57.011803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.012178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.012188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.012566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.013000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.013011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.013355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.013706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.013716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.014053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.014350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.014366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.014718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.015104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.015115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.015486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.015732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.015745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.015925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.016291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.016302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 
00:32:12.493 [2024-04-15 22:58:57.016673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.017069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.017079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.017432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.017809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.017820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.018194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.018461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.018471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.018823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.019162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.019173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.019550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.019904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.019914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.020263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.020655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.020666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.021015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.021398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.021411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 
00:32:12.493 [2024-04-15 22:58:57.021773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.022151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.022161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.022533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.022925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.022936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.023289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.023715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.023755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.024132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.024522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.024533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.024884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.025270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.493 [2024-04-15 22:58:57.025281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.493 qpair failed and we were unable to recover it. 00:32:12.493 [2024-04-15 22:58:57.025658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.026013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.026024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.026377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.026649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.026659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 
00:32:12.494 [2024-04-15 22:58:57.026993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.027269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.027279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.027681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.028022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.028033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.028407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.028785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.028796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.029180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.029569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.029580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.029936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.030321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.030332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.030685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.030860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.030872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.031196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.031585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.031595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 
00:32:12.494 [2024-04-15 22:58:57.031950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.032290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.032301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.032674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.033026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.033036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.033391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.033645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.033656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.034039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.034233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.034242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.034598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.034778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.034788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.035145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.035482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.035492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.035793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.036213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.036224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 
00:32:12.494 [2024-04-15 22:58:57.036596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.036949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.036960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.037311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.037656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.037666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.038020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.038366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.038376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.038730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.039098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.039109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.039490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.039869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.039879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.040212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.040552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.040564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.040912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.041274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.041284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 
00:32:12.494 [2024-04-15 22:58:57.041683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.042056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.042066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.042287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.042632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.042643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.042956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.043325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.043336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.494 [2024-04-15 22:58:57.043682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.043960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.494 [2024-04-15 22:58:57.043971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.494 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.044328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.044712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.044723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.045099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.045438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.045449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.045645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.046008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.046018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 
00:32:12.495 [2024-04-15 22:58:57.046421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.046754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.046765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.047117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.047502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.047513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.047881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.048224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.048235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.048590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.048965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.048975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.049354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.049618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.049629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.049993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.050379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.050391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.050736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.051124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.051134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 
00:32:12.495 [2024-04-15 22:58:57.051487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.051858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.051869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.052182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.052546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.052557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.052912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.053098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.053108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.053465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.053838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.053848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.054199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.054509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.054521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.054890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.055230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.055241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.055586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.055939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.055949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 
00:32:12.495 [2024-04-15 22:58:57.056072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.056453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.056464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.056817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.057161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.057171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.057547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.057959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.057970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.058314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.058698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.058708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.059085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.059463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.059473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.059675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.059915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.059926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.060302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.060647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.060658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 
00:32:12.495 [2024-04-15 22:58:57.061031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.061414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.061425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.061627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.062009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.062019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.062351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.062691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.062702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.063127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.063517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.063528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.495 [2024-04-15 22:58:57.063870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.064211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.495 [2024-04-15 22:58:57.064221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.495 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.064595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.064817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.064827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.065255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.065595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.065606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 
00:32:12.496 [2024-04-15 22:58:57.065989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.066080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.066089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.066406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.066784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.066794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.067137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.067480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.067490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.067848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.068229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.068240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.068547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.068943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.068953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.069302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.069659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.069670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.069929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.070312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.070322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 
00:32:12.496 [2024-04-15 22:58:57.070697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.070917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.070927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.071297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.071679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.071690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.072042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.072423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.072434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.072803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.073124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.073135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.073488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.073748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.073767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.074119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.074459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.074469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 00:32:12.496 [2024-04-15 22:58:57.074824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.075180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.075190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.496 qpair failed and we were unable to recover it. 
00:32:12.496 [2024-04-15 22:58:57.075562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.496 [2024-04-15 22:58:57.075940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.075952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.076294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.076638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.076649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.077035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.077392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.077402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.077754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.078138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.078148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.078534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.078930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.078941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.079249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.079619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.079629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.079986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.080366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.080376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 
00:32:12.497 [2024-04-15 22:58:57.080783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.081122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.081132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.081429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.081678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.081689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.082021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.082410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.082422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.082796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.083180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.083191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.083507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.083879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.083890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.084195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.084562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.084575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.084937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.085237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.085247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 
00:32:12.497 [2024-04-15 22:58:57.085503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.085857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.085870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.086178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.086554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.086564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.086917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.087299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.087309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.087599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.087838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.087849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.088224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.088492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.088502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.088866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.089245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.089255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.089638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.089992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.090002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 
00:32:12.497 [2024-04-15 22:58:57.090352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.090708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.090718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.091076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.091411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.091422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.091802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.092186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.092197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.092575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.092928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.092938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.093290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.093563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.093574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.093909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.094245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.094255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.094501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.094880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.094891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 
00:32:12.497 [2024-04-15 22:58:57.095263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.095580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.095591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.497 qpair failed and we were unable to recover it. 00:32:12.497 [2024-04-15 22:58:57.095947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.497 [2024-04-15 22:58:57.096331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.096342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.096719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.097103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.097113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.097540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.097783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.097795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.098168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.098509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.098519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.098863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.099200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.099210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.099583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.099959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.099970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 
00:32:12.498 [2024-04-15 22:58:57.100326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.100708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.100719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.100919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.101252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.101262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.101638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.102020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.102031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.102402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.102668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.102679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.103068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.103405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.103415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.103773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.104157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.104167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.104522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.104869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.104879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 
00:32:12.498 [2024-04-15 22:58:57.105252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.105593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.105603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.105959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.106297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.106308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.106573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.106896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.106907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.107259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.107641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.107652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.108006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.108388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.108398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.108756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.109012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.109022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.109286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.109654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.109664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 
00:32:12.498 [2024-04-15 22:58:57.110033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.110413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.110423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.110795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.111177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.111187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.498 qpair failed and we were unable to recover it. 00:32:12.498 [2024-04-15 22:58:57.111546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.111867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.498 [2024-04-15 22:58:57.111878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.112237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.112578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.112589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.112948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.113330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.113340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.113698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.114063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.114074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.114437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.114816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.114829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 
00:32:12.499 [2024-04-15 22:58:57.115195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.115585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.115595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.115965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.116346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.116357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.116692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.117055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.117065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.117417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.117761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.117772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.118143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.118526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.118536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.118882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.119266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.119276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.119652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.120011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.120021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 
00:32:12.499 [2024-04-15 22:58:57.120325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.120656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.120666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.120873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.121257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.121267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.121567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.121841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.121855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.122226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.122412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.122421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.122773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.123164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.123174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.123498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.123871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.123881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.124187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.124472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.124482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 
00:32:12.499 [2024-04-15 22:58:57.124837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.125216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.125227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.125632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.126027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.126037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.126376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.126758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.126768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.127125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.127508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.127519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.127893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.128230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.128241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.128600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.128833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.128843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.129210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.129538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.129552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 
00:32:12.499 [2024-04-15 22:58:57.129750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.130062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.130072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.130433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.130775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.130786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.131136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.131522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.499 [2024-04-15 22:58:57.131533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.499 qpair failed and we were unable to recover it. 00:32:12.499 [2024-04-15 22:58:57.131901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.132208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.132218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.132568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.132891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.132901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.133290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.133503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.133513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.133862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.134201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.134211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 
00:32:12.500 [2024-04-15 22:58:57.134583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.134923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.134934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.135283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.135668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.135678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.136052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.136440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.136450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.136820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.137163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.137174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.137950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.139210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.139236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.139578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.139966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.139978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.140356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.140701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.140713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 
00:32:12.500 [2024-04-15 22:58:57.141075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.141460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.141471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.141905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.142151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.142164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.142520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.142865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.142876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.143252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.143609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.143620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.143976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.144618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.144638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.145531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.145960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.145975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.146312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.146613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.146624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 
00:32:12.500 [2024-04-15 22:58:57.147001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.147212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.147225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.147589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.147935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.147946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.148324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.148702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.148712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.149070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.149452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.149465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.149729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.150072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.150082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.150443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.150765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.150775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.151157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.151526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.151536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 
00:32:12.500 [2024-04-15 22:58:57.151895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.152255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.152266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.152687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.152913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.500 [2024-04-15 22:58:57.152925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.500 qpair failed and we were unable to recover it. 00:32:12.500 [2024-04-15 22:58:57.153247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.153626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.153637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.153869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.154186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.154196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.154552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.154866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.154876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.155244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.155585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.155595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.156011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.156402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.156413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 
00:32:12.501 [2024-04-15 22:58:57.156688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.157054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.157064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.157365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.157731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.157742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.157985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.158374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.158384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.158771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.159145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.159155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.159534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.159871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.159881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.160236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.160578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.160589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.160948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.161216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.161226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 
00:32:12.501 [2024-04-15 22:58:57.161600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.161913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.161924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.162263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.162656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.162666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.163035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.163284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.163294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.163702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.163926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.163936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.164317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.164712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.164722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.501 qpair failed and we were unable to recover it. 00:32:12.501 [2024-04-15 22:58:57.165030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.165397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.501 [2024-04-15 22:58:57.165408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.165766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.166144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.166154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 
00:32:12.503 [2024-04-15 22:58:57.166563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.166944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.166955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.167335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.167697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.167707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.168082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.168470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.168482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.168996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.169384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.169396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.169771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.170658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.170682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.170916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.171267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.171278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.171535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.171821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.171831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 
00:32:12.503 [2024-04-15 22:58:57.172228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.172575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.172586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.173040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.173432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.173442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.173796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.174184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.174195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.174540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.174834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.174845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.175200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.175565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.175576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.175957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.176306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.176316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.176655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.177025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.177035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 
00:32:12.503 [2024-04-15 22:58:57.177417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.177616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.177627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.177916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.178187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.178197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.178585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.178960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.178972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.179334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.179706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.179717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.180077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.180470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.180480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.180843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.181127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.181139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.181468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.181797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.181808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 
00:32:12.503 [2024-04-15 22:58:57.182154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.182526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.182537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.182891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.183301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.183312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.503 qpair failed and we were unable to recover it. 00:32:12.503 [2024-04-15 22:58:57.183548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.503 [2024-04-15 22:58:57.183879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.183890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.184094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.184466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.184476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.184841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.185207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.185218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.185636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.186027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.186037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.186391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.186643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.186653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 
00:32:12.504 [2024-04-15 22:58:57.187004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.187345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.187355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.187587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.187960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.187971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.188353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.188571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.188583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.188860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.189247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.189260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.189649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.190040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.190050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.190404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.190589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.190600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.190961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.191351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.191360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 
00:32:12.504 [2024-04-15 22:58:57.191709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.192085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.192095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.192473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.192801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.192812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.193156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.193541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.193555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.193951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.194291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.194302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.194588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.194851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.194862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.195233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.195490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.195502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.195846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.196185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.196195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 
00:32:12.504 [2024-04-15 22:58:57.196566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.196948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.196959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.197263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.197618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.197629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.197958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.198232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.198243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.198591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.198955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.198965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.199347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.199728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.199739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.200112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.200453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.200463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.200820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.201162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.201172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 
00:32:12.504 [2024-04-15 22:58:57.201522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.201792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.201802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.202202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.202550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.202562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.202811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.203098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.203108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.504 qpair failed and we were unable to recover it. 00:32:12.504 [2024-04-15 22:58:57.203366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.504 [2024-04-15 22:58:57.203680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.203691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.204054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.204264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.204274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.204536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.204886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.204897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.205247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.205476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.205487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 
00:32:12.505 [2024-04-15 22:58:57.205829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.206170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.206181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.206431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.206808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.206818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.207155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.207496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.207507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.207861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.208118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.208128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.208511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.208836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.208847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.209152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.209466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.209477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.209811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.210083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.210094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 
00:32:12.505 [2024-04-15 22:58:57.210342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.210608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.210618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.210860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.211202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.211212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.211569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.211829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.211840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.212182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.212491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.212501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.212839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.213182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.213192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.213573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.213917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.213927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.214277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.214576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.214588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 
00:32:12.505 [2024-04-15 22:58:57.215023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.215291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.215301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.215660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.216042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.216053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.216425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.216810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.216824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.217252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.217551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.217562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.217902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.218257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.218267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.218633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.219015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.219025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.219388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.219754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.219765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 
00:32:12.505 [2024-04-15 22:58:57.220117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.220350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.220360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.220733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.221078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.221089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.221392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.221744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.221754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.222121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.222513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.505 [2024-04-15 22:58:57.222524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.505 qpair failed and we were unable to recover it. 00:32:12.505 [2024-04-15 22:58:57.222670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.222997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.223008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.223387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.223644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.223664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.223975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.224332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.224342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 
00:32:12.506 [2024-04-15 22:58:57.224654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.225085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.225095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.225434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.225807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.225818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.226064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.226277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.226288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.226539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.226995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.227006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.227390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.227752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.227762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.228203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.228550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.228562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.228905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.229286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.229297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 
00:32:12.506 [2024-04-15 22:58:57.229765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.230140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.230154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.230535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.230932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.230943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.231318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.231758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.231797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.232188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.232555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.232567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.232988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.233305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.233316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.233782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.234175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.234189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.234410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.234769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.234780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 
00:32:12.506 [2024-04-15 22:58:57.235146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.235503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.235513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.235736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.236080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.236091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.236425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.236884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.236895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.237319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.237675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.237686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.238058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.238412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.238423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.238802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.239080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.239091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.239465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.239716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.239726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 
00:32:12.506 [2024-04-15 22:58:57.240075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.240421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.240432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.240796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.241197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.241208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.241484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.241784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.241796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.242164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.242501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.242512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.242842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.243198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.243210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.506 qpair failed and we were unable to recover it. 00:32:12.506 [2024-04-15 22:58:57.243597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.506 [2024-04-15 22:58:57.243949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.243960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.244326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.244678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.244689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 
00:32:12.507 [2024-04-15 22:58:57.244984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.245321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.245331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.245561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.245897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.245907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.246269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.246593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.246604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.246874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.247264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.247275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.247574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.247805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.247815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.248185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.248540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.248555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.248891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.249183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.249194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 
00:32:12.507 [2024-04-15 22:58:57.249365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.249778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.249788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.250159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.250591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.250601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.250952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.251315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.251325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.251585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.252012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.252023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.252230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.252556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.252569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.252775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.253130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.253140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.253519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.253923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.253933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 
00:32:12.507 [2024-04-15 22:58:57.254286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.254658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.254668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.254933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.255324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.255334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.255705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.256077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.256087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.256320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.256643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.256654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.257026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.257365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.257375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.507 qpair failed and we were unable to recover it. 00:32:12.507 [2024-04-15 22:58:57.257733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.258079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.507 [2024-04-15 22:58:57.258089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.258292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.258658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.258669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 
00:32:12.508 [2024-04-15 22:58:57.259063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.259452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.259462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.259836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.260227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.260237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.260595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.260936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.260946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.261330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.261653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.261664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.261979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.262320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.262330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.262703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.262935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.262945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.263116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.263500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.263510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 
00:32:12.508 [2024-04-15 22:58:57.263707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.264038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.264048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.264260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.264647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.264657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.264888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.265161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.265171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.265380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.265738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.265749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.266011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.266356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.266366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.266726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.267076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.267086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.267454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.267852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.267863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 
00:32:12.508 [2024-04-15 22:58:57.268164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.268556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.268567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.268930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.269195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.269206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.269469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.269831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.269841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.270182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.270527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.270537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.270800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.271142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.271152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.271390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.271678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.271689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.272060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.272364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.272375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 
00:32:12.508 [2024-04-15 22:58:57.272826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.273188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.273199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.273560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.273951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.273962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.274304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.274581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.274592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.274978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.275325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.275335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.275705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.276073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.276084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.276349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.276633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.276644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 00:32:12.508 [2024-04-15 22:58:57.276985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.277351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.508 [2024-04-15 22:58:57.277362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.508 qpair failed and we were unable to recover it. 
00:32:12.509 [2024-04-15 22:58:57.277736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.278096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.278108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.278423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.278715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.278726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.279091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.279438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.279449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.279765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.280135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.280145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.280510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.280746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.280756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.281137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.281383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.281394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.281680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.282052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.282062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 
00:32:12.509 [2024-04-15 22:58:57.282432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.282768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.282778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.283137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.283342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.283352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.283607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.283988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.283998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.284355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.284621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.284633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.284934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.285301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.285312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.285659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.285892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.285901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.286246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.286598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.286611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 
00:32:12.509 [2024-04-15 22:58:57.286984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.287369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.287381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.287737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.288111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.288121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.288357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.288703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.288713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.289044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.289402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.289413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.289764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.290149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.290160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.509 [2024-04-15 22:58:57.290306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.290688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-04-15 22:58:57.290699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.509 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.291059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.291447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.291457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 
00:32:12.780 [2024-04-15 22:58:57.291686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.292056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.292067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.292957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.293351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.293364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.293723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.294093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.294103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.294462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.294878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.294890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.295235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.295619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.295630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.295852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.296183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.296193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.296413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.296800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.296811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 
00:32:12.780 [2024-04-15 22:58:57.297168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.297560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.297572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.297797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.298031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.298043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.298404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.298631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.298642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.299023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.299409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.299420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.299809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.300156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.300167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.780 [2024-04-15 22:58:57.300526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.300736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.780 [2024-04-15 22:58:57.300750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.780 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.300948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.301312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.301323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 
00:32:12.781 [2024-04-15 22:58:57.301694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.302075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.302086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.302447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.302800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.302811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.303197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.303586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.303596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.303926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.304319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.304329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.304501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.304827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.304838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.305204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.305591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.305602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.305965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.306352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.306363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 
00:32:12.781 [2024-04-15 22:58:57.306565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.306883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.306894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.307233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.307621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.307631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.307857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.308094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.308105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.308449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.308809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.308820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.309235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.309459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.309468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.309835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.310223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.310234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.311133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.311527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.311539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 
00:32:12.781 [2024-04-15 22:58:57.312496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.312933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.312946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.313207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.313560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.313570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.313820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.314209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.314220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.314572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.314968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.314978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.315359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.315709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.315720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.316088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.316414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.316428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.316849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.317235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.317246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 
00:32:12.781 [2024-04-15 22:58:57.317591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.317949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.317959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.318331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.318760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.318771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.319102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.319462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.319472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.319644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.319898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.319908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.781 qpair failed and we were unable to recover it. 00:32:12.781 [2024-04-15 22:58:57.320272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.781 [2024-04-15 22:58:57.320661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.320671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.321037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.321338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.321349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.321709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.321956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.321966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 
00:32:12.782 [2024-04-15 22:58:57.322386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.322709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.322720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.323109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.323468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.323481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.323728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.324077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.324088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.324347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.324705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.324716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.325062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.325414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.325424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.325777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.326119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.326129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.326557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.326863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.326875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 
00:32:12.782 [2024-04-15 22:58:57.327233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.327501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.327512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.327869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.328307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.328317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.328539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.328913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.328924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.329312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.329785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.329824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.330227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.330502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.330514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.330919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.331171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.331182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.331368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.331755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.331765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 
00:32:12.782 [2024-04-15 22:58:57.332110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.332421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.332431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.332799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.333205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.333215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.333607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.333853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.333864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.334211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.334499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.334509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.334901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.335255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.335265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.335621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.335934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.335944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 00:32:12.782 [2024-04-15 22:58:57.336306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.336697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.782 [2024-04-15 22:58:57.336707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.782 qpair failed and we were unable to recover it. 
00:32:12.782 [2024-04-15 22:58:57.337083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.337427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.337438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.337823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.338157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.338168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.338528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.338722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.338734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.338940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.339163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.339174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.339554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.339816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.339827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.340214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.340564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.340575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.340976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.341322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.341333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 
00:32:12.783 [2024-04-15 22:58:57.341701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.342050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.342061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.342422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.342853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.342864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.343206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.343582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.343592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.343948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.344310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.344320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.344693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.345026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.345036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.345286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.345602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.345613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.345987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.346331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.346341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 
00:32:12.783 [2024-04-15 22:58:57.346700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.346910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.346922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.347246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.347577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.347589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.347936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.348317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.348328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.348690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.349076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.349086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.349440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.349883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.349894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.350223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.350529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.350539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.350812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.351108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.351118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 
00:32:12.783 [2024-04-15 22:58:57.351437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.351791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.351804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.352160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.352390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.352400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.352749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.353017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.353027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.353388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.353762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.353773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.354115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.354420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.354431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.354800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.355186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.355197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.355420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.355765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.355776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 
00:32:12.783 [2024-04-15 22:58:57.356029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.356367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.356377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.783 qpair failed and we were unable to recover it. 00:32:12.783 [2024-04-15 22:58:57.356732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.783 [2024-04-15 22:58:57.357068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.357078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.357427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.357793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.357804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.358076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.358286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.358296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.358654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.359021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.359032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.359365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.359595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.359606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.359923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.360286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.360296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 
00:32:12.784 [2024-04-15 22:58:57.360673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.361047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.361057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.361404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.361658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.361668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.361927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.362298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.362308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.362529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.362703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.362714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.363045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.363384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.363394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.363741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.364129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.364139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.364498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.364828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.364838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 
00:32:12.784 [2024-04-15 22:58:57.365196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.365546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.365557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.365972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.366313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.366324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.366770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.367168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.367183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.367516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.367884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.367895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.368250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.368639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.368649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.369018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.369372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.369383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.369737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.370126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.370137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 
00:32:12.784 [2024-04-15 22:58:57.370495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.370862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.370874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.371133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.371520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.371531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.371935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.372314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.372325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.784 qpair failed and we were unable to recover it. 00:32:12.784 [2024-04-15 22:58:57.372406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.784 [2024-04-15 22:58:57.372762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.372774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.373150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.373476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.373487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.373846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.374188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.374199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.374576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.375055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.375067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 
00:32:12.785 [2024-04-15 22:58:57.375422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.375681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.375693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.376040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.376430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.376441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.376656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.376777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.376788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.377125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.377469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.377480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.377724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.378111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.378122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.378501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.378753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.378764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.379120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.379422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.379432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 
00:32:12.785 [2024-04-15 22:58:57.379792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.380139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.380149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.380513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.380761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.380772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.381117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.381455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.381465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.381863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.382222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.382233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.382566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.382883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.382894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.383326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.383701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.383713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.384064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.384298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.384309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 
00:32:12.785 [2024-04-15 22:58:57.384578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.384970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.384980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.785 qpair failed and we were unable to recover it. 00:32:12.785 [2024-04-15 22:58:57.385278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.785 [2024-04-15 22:58:57.385604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.385614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.385994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.386344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.386356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.386633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.387000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.387010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.387258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.387518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.387528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.387937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.388287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.388298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.388621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.388932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.388942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 
00:32:12.786 [2024-04-15 22:58:57.389298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.389654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.389664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.390083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.390468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.390479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.390843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.391234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.391245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.391605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.392038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.392048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.392390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.392652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.392662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.393030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.393424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.393434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.393868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.394166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.394176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 
00:32:12.786 [2024-04-15 22:58:57.394550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.394899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.394909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.395291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.395640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.395651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.396025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.396251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.396261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.396496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.396853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.396864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.397218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.397568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.397578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.397935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.398286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.398296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.398655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.398916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.398926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 
00:32:12.786 [2024-04-15 22:58:57.399144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.399501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.399512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.399764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.400069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.400080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.400340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.400668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.400678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.401038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.401418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.401429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.401790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.402142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.402152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.402511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.402866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.402877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.403128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.403228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.403238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 
00:32:12.786 [2024-04-15 22:58:57.403593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.403976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.403986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.786 qpair failed and we were unable to recover it. 00:32:12.786 [2024-04-15 22:58:57.404373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.404738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.786 [2024-04-15 22:58:57.404748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.405003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.405390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.405401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.405770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.406181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.406192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.406602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.406876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.406886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.407141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.407528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.407539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.407926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.408274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.408285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 
00:32:12.787 [2024-04-15 22:58:57.408767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.409117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.409127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.409481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.409896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.409907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.410226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.410612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.410622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.410996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.411372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.411383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.411749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.412141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.412151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.412511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.412881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.412891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.413148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.413540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.413554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 
00:32:12.787 [2024-04-15 22:58:57.413904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.414142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.414153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.414313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.414642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.414656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.415053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.415426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.415436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.415818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.416162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.416172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.416552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.416891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.416901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.417281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.417667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.417678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.418027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.418411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.418422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 
00:32:12.787 [2024-04-15 22:58:57.418785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.419141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.419152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.419517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.419703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.419714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.787 qpair failed and we were unable to recover it. 00:32:12.787 [2024-04-15 22:58:57.420061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.420446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.787 [2024-04-15 22:58:57.420457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.420763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.421096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.421107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.421472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.421820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.421834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.422189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.422523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.422535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.422901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.423284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.423295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 
00:32:12.788 [2024-04-15 22:58:57.423647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.424025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.424035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.424408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.424756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.424767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.425111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.425340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.425350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.425705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.426068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.426078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.426429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.426816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.426826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.427213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.427599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.427609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.427971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.428311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.428322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 
00:32:12.788 [2024-04-15 22:58:57.428685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.429064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.429074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.429430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.429766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.429777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.429993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.430388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.430399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.430695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.431071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.431082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.431458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.431806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.431817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.432171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.432556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.432568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.432917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.433256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.433267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 
00:32:12.788 [2024-04-15 22:58:57.433617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.433984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.433995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.434319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.434695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.434706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.435097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.435483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.435494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.435856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.436242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.436253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.436604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.436991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.437002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.437369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.437753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.437762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.438110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.438484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.438493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 
00:32:12.788 [2024-04-15 22:58:57.438840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.439218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.439227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.439586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.439932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.439942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.440315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.440656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.440667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.788 qpair failed and we were unable to recover it. 00:32:12.788 [2024-04-15 22:58:57.440914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.441300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.788 [2024-04-15 22:58:57.441313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.441650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.442022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.442033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.442385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.442614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.442625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.442977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.443316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.443327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 
00:32:12.789 [2024-04-15 22:58:57.443699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.444065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.444076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.444451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.444735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.444746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.445134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.445522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.445533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.445907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.446293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.446304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.446650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.447028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.447040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.447418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.447645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.447657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.448037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.448424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.448435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 
00:32:12.789 [2024-04-15 22:58:57.448810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.449040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.449051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.449409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.449794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.449806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.450215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.450554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.450566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.450920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.451305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.451318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.451658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.452000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.452012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.452229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.452615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.452626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.453005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.453386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.453397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 
00:32:12.789 [2024-04-15 22:58:57.453752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.454137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.454149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.454492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.454868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.454880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.455237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.455624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.455635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.456005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.456172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.456184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.456538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.456872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.456883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.457291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.457561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.457572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.789 qpair failed and we were unable to recover it. 00:32:12.789 [2024-04-15 22:58:57.457931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.789 [2024-04-15 22:58:57.458311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.458322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 
00:32:12.790 [2024-04-15 22:58:57.458687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.459042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.459053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.459356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.459720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.459731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.459977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.460324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.460334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.460687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.461028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.461038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.461452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.461716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.461727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.462082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.462420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.462431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.462783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.463176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.463187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 
00:32:12.790 [2024-04-15 22:58:57.463541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.463909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.463920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.464296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.464675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.464686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.465056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.465402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.465412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.465743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.466087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.466097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.466450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.466725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.466736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.467112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.467493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.467503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.467883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.468271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.468282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 
00:32:12.790 [2024-04-15 22:58:57.468641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.468949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.468960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.469346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.469684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.469695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.470015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.470337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.470347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.470653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.471025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.471035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.471407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.471785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.471795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.472146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.472464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.472475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.790 [2024-04-15 22:58:57.472840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.473182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.473192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 
00:32:12.790 [2024-04-15 22:58:57.473546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.473871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.790 [2024-04-15 22:58:57.473883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.790 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.474255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.474546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.474558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.474788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.475172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.475183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.475555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.475857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.475868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.476220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.476593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.476603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.476996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.477380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.477390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.477662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.478052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.478063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 
00:32:12.791 [2024-04-15 22:58:57.478438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.478762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.478774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.479079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.479440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.479451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.479806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.480149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.480160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.480415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.480778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.480789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.481168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.481440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.481452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.481803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.482143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.482154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.482527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.482908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.482919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 
00:32:12.791 [2024-04-15 22:58:57.483272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.483527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.483538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.483917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.484299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.484311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.485094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.485372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.485386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.485872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.486257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.486271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.486783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.487238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.487252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.487471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.487888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.487903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.488272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.488594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.488606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 
00:32:12.791 [2024-04-15 22:58:57.488969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.489325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.489336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.489694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.489886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.489895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.490140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.490377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.490387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.490746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.491126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.491136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.491504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.491853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.491863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.492217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.492563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.492574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.492925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.493313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.493323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 
00:32:12.791 [2024-04-15 22:58:57.493670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.494032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.494042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.791 [2024-04-15 22:58:57.494413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.494706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.791 [2024-04-15 22:58:57.494718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.791 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.495092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.495361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.495371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.495617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.495987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.495997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.496352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.496707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.496718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.497114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.497458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.497468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.497691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.498028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.498038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 
00:32:12.792 [2024-04-15 22:58:57.498414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.498768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.498779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.499132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.499520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.499530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.499806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.500180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.500190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.500548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.500946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.500956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.501335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.501763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.501802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.502183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.502479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.502489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.502849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.503086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.503097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 
00:32:12.792 [2024-04-15 22:58:57.503451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.503799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.503810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.504170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.504514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.504525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.504890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.505238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.505249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.505600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.506040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.506052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.506453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.506694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.506704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.507090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.507465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.507475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.507840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.508166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.508176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 
00:32:12.792 [2024-04-15 22:58:57.508517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.508808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.508818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.792 [2024-04-15 22:58:57.509176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.509561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.792 [2024-04-15 22:58:57.509572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.792 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.509956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.510336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.510347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.510704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.511051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.511061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.511361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.511629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.511640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.511888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.512236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.512247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.512624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.512952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.512962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 
00:32:12.793 [2024-04-15 22:58:57.513335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.513687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.513698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.514100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.514332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.514342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.514704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.515052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.515063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.515408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.515636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.515647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.515947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.516209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.516221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.516556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.516814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.516824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.517168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.517501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.517511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 
00:32:12.793 [2024-04-15 22:58:57.517859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.518107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.518118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.518477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.518819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.518829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.519207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.519549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.519560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.519971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.520238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.520249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.520612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.520969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.520979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.521240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.521589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.521600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.521983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.522275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.522286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 
00:32:12.793 [2024-04-15 22:58:57.522635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.522947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.522957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.523313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.523623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.523634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.524010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.524398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.524408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.524784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.525175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.525186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.525503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.525760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.525770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.526146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.526518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.526528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.526800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.527178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.527188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 
00:32:12.793 [2024-04-15 22:58:57.527578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.528041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.528051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.528412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.528641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.528652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.528984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.529200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.793 [2024-04-15 22:58:57.529211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.793 qpair failed and we were unable to recover it. 00:32:12.793 [2024-04-15 22:58:57.529617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.529982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.529992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.530370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.530667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.530678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.531036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.531380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.531390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.531710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.532089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.532099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 
00:32:12.794 [2024-04-15 22:58:57.532457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.532894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.532904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.533289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.533622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.533633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.533987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.534333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.534344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.534619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.534877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.534888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.535245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.535476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.535486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.535895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.536140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.536150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.536507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.536837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.536848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 
00:32:12.794 [2024-04-15 22:58:57.537048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.537378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.537388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.537747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.537967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.537977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.538363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.538640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.538651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.538870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.539220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.539231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.539580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.539968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.539978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.540288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.540653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.540664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.540963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.541338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.541348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 
00:32:12.794 [2024-04-15 22:58:57.541704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.542079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.542089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.542347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.542664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.542675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.543044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.543249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.543260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.543613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.543888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.543900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.544243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.544557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.544568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.544982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.545322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.545332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 00:32:12.794 [2024-04-15 22:58:57.545613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.545948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.794 [2024-04-15 22:58:57.545958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.794 qpair failed and we were unable to recover it. 
00:32:12.795 [2024-04-15 22:58:57.546260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.546610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.546620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.546978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.547367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.547378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.547636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.548030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.548040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.548406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.548763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.548773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.549156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.549481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.549491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.549821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.550165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.550176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.550550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.550940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.550952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 
00:32:12.795 [2024-04-15 22:58:57.551311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.551797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.551836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.552211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.552528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.552539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.552799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.553207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.553217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.553565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.553820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.553830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.554183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.554529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.554539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.554913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.555308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.555318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.555670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.556067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.556078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 
00:32:12.795 [2024-04-15 22:58:57.556424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.556843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.556854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.557216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.557551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.557563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.557928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.558262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.558272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.558633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.559028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.559039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.559395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.559787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.559798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.560023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.560389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.560399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 00:32:12.795 [2024-04-15 22:58:57.560768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.561087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.561098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.795 qpair failed and we were unable to recover it. 
00:32:12.795 [2024-04-15 22:58:57.561445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.795 [2024-04-15 22:58:57.561812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.561823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.562201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.562592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.562602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.562956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.563329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.563339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.563708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.564082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.564092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.564302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.564582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.564594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.564958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.565300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.565310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.565660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.565999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.566009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 
00:32:12.796 [2024-04-15 22:58:57.566441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.566817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.566828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.567175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.567566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.567577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.567907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.568245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.568255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.568619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.568997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.569007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.569388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.569750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.569761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.570172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.570566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.570577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.570922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.571225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.571236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 
00:32:12.796 [2024-04-15 22:58:57.571549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.571864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.571874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.572254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.572637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.572648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.573017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.573405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.573416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.573784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.574162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.574172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.574527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.574918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.574928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.575299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.575648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.575658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.576076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.576412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.576423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 
00:32:12.796 [2024-04-15 22:58:57.576643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.576912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.576922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.577276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.577663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.577674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.578077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.578463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.578474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.578830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.579170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.579181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.579557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.579903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.579914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:12.796 [2024-04-15 22:58:57.580220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.580511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.796 [2024-04-15 22:58:57.580522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:12.796 qpair failed and we were unable to recover it. 00:32:13.066 [2024-04-15 22:58:57.580881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.066 [2024-04-15 22:58:57.581270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.066 [2024-04-15 22:58:57.581281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 
00:32:13.067 [2024-04-15 22:58:57.581635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.581903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.581913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.582289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.582623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.582634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.583007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.583391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.583401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.583752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.584136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.584146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.584500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.584835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.584845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.585163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.585541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.585560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.585890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.586100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.586110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 
00:32:13.067 [2024-04-15 22:58:57.586459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.586843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.586853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.587208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.587596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.587612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.587990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.588379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.588389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.588736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.589121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.589132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.589508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.589888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.589899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.590244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.590593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.590604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.591018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.591359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.591369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 
00:32:13.067 [2024-04-15 22:58:57.591717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.591949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.591959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.592150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.592502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.592513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.592892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.593276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.593286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.593660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.594016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.594026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.594253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.594596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.594607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.594976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.595352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.595362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.595708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.595932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.595942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 
00:32:13.067 [2024-04-15 22:58:57.596146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.596525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.596535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.596799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.597023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.597033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.597412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.597767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.597778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.598156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.598547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.598558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.598866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.599109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.599119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.599496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.599857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.599868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.600212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.600558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.600568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 
00:32:13.067 [2024-04-15 22:58:57.600923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.601273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.601283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.601540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.601859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.601870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.602106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.602467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.602477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.602708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.602947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.602957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.603313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.603623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.603635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.604045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.604387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.604397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.604747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.604981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.604991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 
00:32:13.067 [2024-04-15 22:58:57.605333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.605699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.605709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.606073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.606418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.606429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.606772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.607180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.607190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.607549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.607905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.607916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.608186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.608501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.608512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.608909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.609243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.609254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.609637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.609896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.609907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 
00:32:13.067 [2024-04-15 22:58:57.610262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.610679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.610689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.611017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.611345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.067 [2024-04-15 22:58:57.611355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.067 qpair failed and we were unable to recover it. 00:32:13.067 [2024-04-15 22:58:57.611730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.612072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.612083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.612431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.612749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.612760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.613001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.613341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.613351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.613711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.614082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.614092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.614481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.614896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.614907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 
00:32:13.068 [2024-04-15 22:58:57.615279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.615627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.615640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.616028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.616391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.616402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.616772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.617153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.617163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.617515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.617872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.617882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.618196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.618348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.618358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.618694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.619049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.619059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.619246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.619556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.619567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 
00:32:13.068 [2024-04-15 22:58:57.619939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.620216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.620226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.620576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.620979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.620989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.621351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.621674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.621685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.622065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.622274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.622285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.622593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.622919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.622929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.623122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.623430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.623440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.623794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.624169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.624179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 
00:32:13.068 [2024-04-15 22:58:57.624531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.624744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.624755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.625126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.625425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.625436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.625780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.626157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.626167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.626460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.626803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.626813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.627192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.627587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.627597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.627951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.628245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.628256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.628597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.628966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.628976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 
00:32:13.068 [2024-04-15 22:58:57.629365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.629683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.629694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.629956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.630315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.630325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.630575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.630943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.630953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.631317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.631621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.631631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.631961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.632264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.632274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.632611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.632881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.632891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.633145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.633438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.633448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 
00:32:13.068 [2024-04-15 22:58:57.633654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.634032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.634043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.634415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.634735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.634745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.635099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.635443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.635453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.635641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.635916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.635926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.636215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.636579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.636589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.636890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.637232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.637242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.637573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.637885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.637896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 
00:32:13.068 [2024-04-15 22:58:57.638250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.638500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.638511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.638777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.639119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.639130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.639388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.639733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.639744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.640068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.640305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.640316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.640583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.640976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.640987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.641370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.641690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.641701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.642082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.642291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.642301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 
00:32:13.068 [2024-04-15 22:58:57.642547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.642898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.642908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.643246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.643514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.643525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.643878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.644168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.644179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.644446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.644805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.644815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.645185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.645530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.645541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.645914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.646257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.646267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.646621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.646977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.646988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 
00:32:13.068 [2024-04-15 22:58:57.647340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.647699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.647710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.648080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.648468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.068 [2024-04-15 22:58:57.648478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.068 qpair failed and we were unable to recover it. 00:32:13.068 [2024-04-15 22:58:57.648704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.649072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.649084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.649427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.649732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.649743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.650097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.650441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.650452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.650814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.651175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.651185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.651537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.651923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.651934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 
00:32:13.069 [2024-04-15 22:58:57.652273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.652615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.652626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.652958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.653222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.653232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.653600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.653971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.653982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.654332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.654664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.654675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.655020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.655382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.655393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.655634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.655965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.655976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.656357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.656722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.656733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 
00:32:13.069 [2024-04-15 22:58:57.657033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.657417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.657428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.657889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.658236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.658247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.658459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.658817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.658828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.659156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.659407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.659417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.659818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.660199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.660209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.660583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.660943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.660952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.661309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.661699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.661710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 
00:32:13.069 [2024-04-15 22:58:57.662066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.662422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.662433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.662823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.663192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.663203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.663483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.663763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.663774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.664133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.664488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.664499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.664851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.665250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.665261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.665643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.665909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.665920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.666260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.666604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.666614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 
00:32:13.069 [2024-04-15 22:58:57.667018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.667366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.667376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.667826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.668047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.668057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.668428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.668799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.668810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.669187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.669567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.669578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.669952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.670297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.670307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.670561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.670925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.670936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.671141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.671448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.671458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 
00:32:13.069 [2024-04-15 22:58:57.671703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.672099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.672109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.672421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.672780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.672790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.673169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.673468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.673478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.673815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.674034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.674044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.674421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.674843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.674854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.675210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.675604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.675614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.675887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.676204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.676213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 
00:32:13.069 [2024-04-15 22:58:57.676568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.676843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.676853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.677160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.677572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.677583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.677879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.678271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.678281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.678662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.679020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.679030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.679391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.679723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.679734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.680068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.680450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.680460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.680769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.681117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.681127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 
00:32:13.069 [2024-04-15 22:58:57.681508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.681870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.069 [2024-04-15 22:58:57.681880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.069 qpair failed and we were unable to recover it. 00:32:13.069 [2024-04-15 22:58:57.682235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.682583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.682593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.682984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.683328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.683338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.683745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.684129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.684140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.684516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.684811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.684823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.685179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.685579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.685590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.685859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.686228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.686238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 
00:32:13.070 [2024-04-15 22:58:57.686595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.686980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.686991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.687367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.687732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.687744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.688111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.688334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.688344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.688720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.689069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.689079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.689503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.689840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.689851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.690227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.690420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.690431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.690689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.690921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.690931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 
00:32:13.070 [2024-04-15 22:58:57.691317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.691627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.691638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.692009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.692402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.692412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.692783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.693164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.693173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.693481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.693835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.693846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.694194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.694583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.694593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.694965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.695270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.695280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.695642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.696030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.696040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 
00:32:13.070 [2024-04-15 22:58:57.696402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.696637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.696647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.696909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.697255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.697265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.697618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.697968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.697979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.698355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.698625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.698636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.698977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.699244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.699255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.699561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.699924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.699934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.700192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.700536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.700549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 
00:32:13.070 [2024-04-15 22:58:57.700793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.701173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.701183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.701388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.701785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.701796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.702170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.702430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.702441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.702908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.703247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.703260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.703636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.703854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.703865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.704110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.704370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.704380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.704740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.705116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.705127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 
00:32:13.070 [2024-04-15 22:58:57.705488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.705810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.705820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.706192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.706536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.706550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.706924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.707189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.707200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.707564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.707989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.707999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.708378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.708739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.708749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.709151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.709506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.709516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.709880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.710252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.710263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 
00:32:13.070 [2024-04-15 22:58:57.710678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.711035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.711046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.711400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.711780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.711791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.712142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.712521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.712532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.712874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.713270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.713283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.713776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.714177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.714191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.714550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.714905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.714915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.715248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.715476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.715486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 
00:32:13.070 [2024-04-15 22:58:57.715762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.716095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.716106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.716554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.716905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.716915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.717227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.717570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.717581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.718045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.718388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.718399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.718655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.718898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.718909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.719240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.719562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.719573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.719945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.720214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.720226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 
00:32:13.070 [2024-04-15 22:58:57.720596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.720926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.720937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.070 qpair failed and we were unable to recover it. 00:32:13.070 [2024-04-15 22:58:57.721275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.070 [2024-04-15 22:58:57.721619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.721630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.721961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.722291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.722302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.722619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.722949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.722959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.723217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.723555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.723566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.723945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.724291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.724301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.724556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.724797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.724807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 
00:32:13.071 [2024-04-15 22:58:57.725197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.725540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.725555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.725910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.726122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.726132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.726479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.726715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.726725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.727115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.727422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.727433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.727659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.727957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.727967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.728349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.728613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.728623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.728991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.729330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.729341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 
00:32:13.071 [2024-04-15 22:58:57.729620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.730048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.730059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.730320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.730664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.730675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.730964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.731329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.731340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.731606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.732037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.732047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.732497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.732790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.732800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.733106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.733482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.733492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.733852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.734189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.734200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 
00:32:13.071 [2024-04-15 22:58:57.734556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.734820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.734830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.735151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.735448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.735458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.735852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.736238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.736249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.736627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.737057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.737067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.737433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.737623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.737633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.738049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.738269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.738279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.738634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.738864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.738874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 
00:32:13.071 [2024-04-15 22:58:57.739243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.739623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.739634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.739951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.740330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.740340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.740641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.740926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.740936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.741243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.741637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.741647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.741977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.742255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.742265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.742638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.743136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.743146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.743397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.743799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.743810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 
00:32:13.071 [2024-04-15 22:58:57.744165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.744554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.744565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.744941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.745175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.745185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.745623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.745931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.745941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.746323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.746546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.746556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.746966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.747268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.747279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.747673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.748057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.748069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.748356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.748752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.748762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 
00:32:13.071 [2024-04-15 22:58:57.749160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.749429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.749441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.749848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.750167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.750178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.750585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.750827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.750837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.751190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.751455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.751466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.751821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.752213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.752224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.752653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.753030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.753041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.753425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.753766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.753776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 
00:32:13.071 [2024-04-15 22:58:57.754142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.754398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.754408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.754677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.755033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.755043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.755425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.755801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.755812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.756053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.756282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.756292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.071 qpair failed and we were unable to recover it. 00:32:13.071 [2024-04-15 22:58:57.756661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.071 [2024-04-15 22:58:57.757060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.757071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.757198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.757458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.757468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.757904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.758248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.758260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 
00:32:13.072 [2024-04-15 22:58:57.758638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.759060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.759070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.759369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.759603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.759613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.759896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.760115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.760126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.760352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.760638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.760649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.760898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.761249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.761260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.761636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.761941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.761952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.762363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.762640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.762651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 
00:32:13.072 [2024-04-15 22:58:57.763011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.763359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.763369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.763733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.764119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.764129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.764485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.764845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.764855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.765199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.765549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.765559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.765910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.766215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.766225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.766578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.766953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.766963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.767317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.767583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.767593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 
00:32:13.072 [2024-04-15 22:58:57.767892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.768217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.768227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.768583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.768912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.768923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.769303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.769689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.769699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.770052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.770414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.770424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.770732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.771083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.771094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.771379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.771637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.771647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.772006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.772350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.772360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 
00:32:13.072 [2024-04-15 22:58:57.772824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.773214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.773225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.773573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.773847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.773857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.774220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.774609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.774619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.774977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.775257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.775268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.775644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.776032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.776042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.776383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.776661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.776672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.776887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.777243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.777253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 
00:32:13.072 [2024-04-15 22:58:57.777539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.777838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.777848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.778202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.778526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.778536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.778928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.779311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.779322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.779596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.779841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.779852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.780188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.780529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.780539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.780814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.781115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.781125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.781366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.781679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.781689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 
00:32:13.072 [2024-04-15 22:58:57.782059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.782195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.782207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.782579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.782951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.782961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.783227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.783590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.783601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.784005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.784342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.784352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.784703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.785104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.785114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.785448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.785788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.785799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.786183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.786509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.786519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 
00:32:13.072 [2024-04-15 22:58:57.786794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.787105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.787115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.787465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.787699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.787710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.788058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.788401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.788411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.788765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.789109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.789120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.789357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.789661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.789671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.790055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.790326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.790337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.790647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.790976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.790986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 
00:32:13.072 [2024-04-15 22:58:57.791334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.791572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.791582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.791910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.792228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.792238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.072 qpair failed and we were unable to recover it. 00:32:13.072 [2024-04-15 22:58:57.792588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.072 [2024-04-15 22:58:57.792966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.792975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.793309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.793578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.793588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.793921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.794264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.794274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.794626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.795016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.795026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.795321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.795684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.795694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 
00:32:13.073 [2024-04-15 22:58:57.796029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.796316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.796326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.796658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.796905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.796915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.797279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.797578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.797589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.797951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.798286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.798296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.798671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.799057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.799067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.799445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.799677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.799688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.800017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.800238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.800248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 
00:32:13.073 [2024-04-15 22:58:57.800560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.800827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.800837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.801218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.801558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.801569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.801894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.802175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.802185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.802553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.802885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.802896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.803240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.803462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.803471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.803805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.804061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.804071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.804427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.804635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.804645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 
00:32:13.073 [2024-04-15 22:58:57.805006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.805226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.805236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.805593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.805905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.805915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.806284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.806618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.806629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.807008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.807351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.807362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.807702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.808089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.808099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.808402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.808675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.808685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.809015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.809358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.809370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 
00:32:13.073 [2024-04-15 22:58:57.809720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.809987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.809996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.810366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.810716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.810727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.811080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.811460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.811471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.811809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.812024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.812036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.812388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.812716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.812727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.813078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.813420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.813431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.813811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.814187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.814198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 
00:32:13.073 [2024-04-15 22:58:57.814567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.814937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.814947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.815253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.815506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.815516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.815897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.816108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.816119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.816456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.816799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.816810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.817174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.817512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.817523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.817892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.818248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.818259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.818623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.819021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.819032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 
00:32:13.073 [2024-04-15 22:58:57.819399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.819816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.819827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.820191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.820587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.820598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.820951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.821204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.821214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.821569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.821930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.821940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.822260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.822572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.822582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.822941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.823288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.823298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.823655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.824040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.824051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 
00:32:13.073 [2024-04-15 22:58:57.824396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.824750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.824760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.825172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.825549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.825560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.825898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.826250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.826260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.826504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.826887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.826898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.827274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.827583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.827593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.073 [2024-04-15 22:58:57.827899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.828242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.073 [2024-04-15 22:58:57.828252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.073 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.828633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.828991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.829002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 
00:32:13.074 [2024-04-15 22:58:57.829239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.829583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.829593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.829952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.830291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.830301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.830660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.830918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.830928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.831264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.831645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.831655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.831986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.832362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.832372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.832499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.832826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.832837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.833190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.833500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.833510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 
00:32:13.074 [2024-04-15 22:58:57.833882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.834198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.834209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.834611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.834894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.834905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.835279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.835657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.835668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.836046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.836389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.836399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.836769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.837034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.837044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.837402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.837658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.837668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.837994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.838377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.838387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 
00:32:13.074 [2024-04-15 22:58:57.838669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.838974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.838984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.839362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.839632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.839643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.839994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.840221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.840230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.840451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.840796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.840807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.841164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.841539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.841554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.841925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.842205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.842216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.842549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.842907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.842917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 
00:32:13.074 [2024-04-15 22:58:57.843299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.843645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.843655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.844011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.844356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.844368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.844723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.845102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.845112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.845472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.845868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.845878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.846201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.846433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.846443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.846656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.846931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.846942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.847114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.847361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.847372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 
00:32:13.074 [2024-04-15 22:58:57.847805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.848195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.848205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.848597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.849041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.849051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.849467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.849630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.849640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.849995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.850317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.850327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.850592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.850872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.850883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.851264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.851610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.851621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.851951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.852307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.852316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 
00:32:13.074 [2024-04-15 22:58:57.852656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.852958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.852969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.853212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.853591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.853601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.853882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.854157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.854167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.854504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.854878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.854888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.855192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.855462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.855472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.855674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.856031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.856041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.856386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.856760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.856771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 
00:32:13.074 [2024-04-15 22:58:57.857117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.857461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.857471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.857828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.858103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.858114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.858445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.858795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.858806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.859151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.859386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.859396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.859745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.860048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.860059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.860303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.860700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.074 [2024-04-15 22:58:57.860710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.074 qpair failed and we were unable to recover it. 00:32:13.074 [2024-04-15 22:58:57.861086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.861431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.861441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 
00:32:13.075 [2024-04-15 22:58:57.861857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.862194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.862204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 00:32:13.075 [2024-04-15 22:58:57.862554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.862844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.862854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 00:32:13.075 [2024-04-15 22:58:57.863026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.863336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.863346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 00:32:13.075 [2024-04-15 22:58:57.863709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.864090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.864100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 00:32:13.075 [2024-04-15 22:58:57.864351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.864683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.864694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 00:32:13.075 [2024-04-15 22:58:57.864996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.865209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.865220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 00:32:13.075 [2024-04-15 22:58:57.865600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.865977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.865987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 
00:32:13.075 [2024-04-15 22:58:57.866361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.866639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.075 [2024-04-15 22:58:57.866649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.075 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.867044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.867386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.867396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.867749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.868063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.868073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.868412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.868760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.868771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.869153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.869384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.869395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.869756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.869980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.869991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.870340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.870654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.870665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 
00:32:13.344 [2024-04-15 22:58:57.871036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.871334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.871344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.871746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.872089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.872099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.872473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.872801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.872811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.873160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.873437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.873448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.873847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.874142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.874153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.874413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.874765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.874776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.875106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.875379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.875390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 
00:32:13.344 [2024-04-15 22:58:57.875635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.876020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.876031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.876403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.876718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.876728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.877146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.877417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.877428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.877803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.878148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.878161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.878396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.878768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.878779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.344 qpair failed and we were unable to recover it. 00:32:13.344 [2024-04-15 22:58:57.879127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.879498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.344 [2024-04-15 22:58:57.879508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.879783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.880167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.880177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 
00:32:13.345 [2024-04-15 22:58:57.880549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.880902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.880912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.881269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.881617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.881627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.882015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.882330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.882341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.882700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.883082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.883092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.883433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.883760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.883770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.884126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.884478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.884488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.884687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.884931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.884941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 
00:32:13.345 [2024-04-15 22:58:57.885229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.885675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.885686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.886084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.886475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.886485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.886705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.887083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.887093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.887472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.887864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.887875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.888231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.888442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.888453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.888815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.889034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.889044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.889272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.889532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.889555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 
00:32:13.345 [2024-04-15 22:58:57.890013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.890403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.890413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.890668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.890876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.890887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.891255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.891655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.891666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.892042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.892424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.892435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.892815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.893191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.893202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.893405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.893767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.893778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.894144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.894377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.894387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 
00:32:13.345 [2024-04-15 22:58:57.894647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.895002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.895012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.895234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.895576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.895587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.895838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.896184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.896194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.896589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.896714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.896724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.345 [2024-04-15 22:58:57.897040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.897403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.345 [2024-04-15 22:58:57.897414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.345 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.897621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.897971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.897982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.898330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.898719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.898730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 
00:32:13.346 [2024-04-15 22:58:57.899142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.899461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.899472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.899829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.900126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.900136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.900516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.900930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.900941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.901291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.901681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.901692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.902119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.902211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.902221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.902583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.902963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.902973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.903322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.903706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.903717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 
00:32:13.346 [2024-04-15 22:58:57.904017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.904418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.904429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.904803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.905197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.905207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.905561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.905896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.905909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.906251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.906474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.906485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.906841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.907217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.907229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.907608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.907893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.907904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.908257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.908652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.908662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 
00:32:13.346 [2024-04-15 22:58:57.908937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.909314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.909327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.909897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.910251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.910263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.910648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.911037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.911047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.911390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.911714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.911726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.912036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.912285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.912295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.912640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.912967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.912977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.913336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.913606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.913617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 
00:32:13.346 [2024-04-15 22:58:57.914012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.914378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.914388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.914760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.915147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.915157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.915510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.915837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.915847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.916223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.916384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.916395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-04-15 22:58:57.916648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.917059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-04-15 22:58:57.917070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-04-15 22:58:57.917321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-04-15 22:58:57.917567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-04-15 22:58:57.917578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-04-15 22:58:57.917951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-04-15 22:58:57.918290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-04-15 22:58:57.918301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 
00:32:13.347 [2024-04-15 22:58:57.918613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.347 [2024-04-15 22:58:57.918984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.347 [2024-04-15 22:58:57.918994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420
00:32:13.347 qpair failed and we were unable to recover it.
00:32:13.347 [... the same error cycle (two posix_sock_create connect() failures with errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt in this window, from 22:58:57.919 through 22:58:58.028 ...]
00:32:13.352 [2024-04-15 22:58:58.029161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.029503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.029512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.352 qpair failed and we were unable to recover it. 00:32:13.352 [2024-04-15 22:58:58.029857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.030241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.030252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.352 qpair failed and we were unable to recover it. 00:32:13.352 [2024-04-15 22:58:58.030621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.030961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.030972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.352 qpair failed and we were unable to recover it. 00:32:13.352 [2024-04-15 22:58:58.031324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.031665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.031676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.352 qpair failed and we were unable to recover it. 00:32:13.352 [2024-04-15 22:58:58.032024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.032408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.032418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.352 qpair failed and we were unable to recover it. 00:32:13.352 [2024-04-15 22:58:58.032785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.033170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.033181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.352 qpair failed and we were unable to recover it. 00:32:13.352 [2024-04-15 22:58:58.033557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.033946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.033956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.352 qpair failed and we were unable to recover it. 
00:32:13.352 [2024-04-15 22:58:58.034241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.034614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.034625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.352 qpair failed and we were unable to recover it. 00:32:13.352 [2024-04-15 22:58:58.034963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.352 [2024-04-15 22:58:58.035192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.035204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.035558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.035940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.035950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.036281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.036638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.036648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.037074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.037414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.037424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.037806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.038147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.038157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.038516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.038871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.038882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 
00:32:13.353 [2024-04-15 22:58:58.039253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.039641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.039653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.040006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.040347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.040358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.040745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.041014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.041025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.041336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.041656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.041667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.042002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.042363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.042373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.042725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.043106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.043117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.043489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.043878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.043889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 
00:32:13.353 [2024-04-15 22:58:58.044238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.044594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.044607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.044948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.045334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.045344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.045698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.045967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.045978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.046326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.046712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.046723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.047080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.047413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.047423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.047796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.048179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.048189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.048540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.048855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.048865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 
00:32:13.353 [2024-04-15 22:58:58.049178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.049541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.049556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.049908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.050251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.050262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.050639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.050984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.050994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.051155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.051509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.353 [2024-04-15 22:58:58.051520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.353 qpair failed and we were unable to recover it. 00:32:13.353 [2024-04-15 22:58:58.051894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.052175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.052186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.052538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.052905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.052916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.053289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.053678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.053688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 
00:32:13.354 [2024-04-15 22:58:58.054040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.054420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.054430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.054779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.055164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.055173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.055526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.055891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.055902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.056272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.056617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.056627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.057055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.057323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.057334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.057695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.058078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.058088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.058441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.058784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.058795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 
00:32:13.354 [2024-04-15 22:58:58.059052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.059417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.059427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.059763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.060146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.060157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.060360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.060694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.060704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.061058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.061405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.061415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.061791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.062136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.062147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.062501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.062594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.062607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.062961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.063269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.063279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 
00:32:13.354 [2024-04-15 22:58:58.063632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.064011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.064021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.064327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.064538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.064553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.064905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.065287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.065298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.065664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.066050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.066062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.066420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.066794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.066805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.067180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.067561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.067573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.067924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.068251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.068260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 
00:32:13.354 [2024-04-15 22:58:58.068590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.068846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.068856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.069200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.069532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.069548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.069786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.070078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.070088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.070441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.070795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.070806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.071179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.071566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.354 [2024-04-15 22:58:58.071576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.354 qpair failed and we were unable to recover it. 00:32:13.354 [2024-04-15 22:58:58.071927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.072311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.072322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.072704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.073060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.073071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 
00:32:13.355 [2024-04-15 22:58:58.073288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.073590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.073601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.073851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.074221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.074231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.074587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.074856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.074866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.075206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.075552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.075563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.075920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.076305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.076316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.076753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.077052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.077062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.077411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.077727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.077738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 
00:32:13.355 [2024-04-15 22:58:58.078055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.078390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.078401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.078756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.079140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.079151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.079525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.079907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.079919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.080252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.080636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.080647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.081024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.081407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.081417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.081827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.082214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.082224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.082587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.082926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.082936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 
00:32:13.355 [2024-04-15 22:58:58.083291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.083635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.083645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.083980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.084251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.084262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.084619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.084955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.084965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.085336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.085723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.085734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.086087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.086476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.086486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.086864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.087209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.087219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.087564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.087788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.087798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 
00:32:13.355 [2024-04-15 22:58:58.088139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.088479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.088489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.355 qpair failed and we were unable to recover it. 00:32:13.355 [2024-04-15 22:58:58.088831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.089058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.355 [2024-04-15 22:58:58.089069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.089441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.089771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.089781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.090116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.090461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.090473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.090818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.091162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.091174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.091373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.091728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.091738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.092152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.092548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.092558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 
00:32:13.356 [2024-04-15 22:58:58.092881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.093267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.093277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.093684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.093993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.094004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.094300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.094558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.094568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.094734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.095067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.095078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.095395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.095771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.095782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.096135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.096480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.096491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.096863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.097183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.097194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 
00:32:13.356 [2024-04-15 22:58:58.097534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.097893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.097904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.098257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.098527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.098537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.098905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.099288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.099298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.099651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.099994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.100004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.100375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.100728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.100739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.101090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.101482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.101493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 00:32:13.356 [2024-04-15 22:58:58.101843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.102236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.356 [2024-04-15 22:58:58.102247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.356 qpair failed and we were unable to recover it. 
[repeated identical errors omitted: from 22:58:58.102 through 22:58:58.204 the same sequence recurs unchanged — posix.c:1032:posix_sock_create: connect() failed, errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."]
00:32:13.630 [2024-04-15 22:58:58.205040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.205425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.205435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-04-15 22:58:58.205673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.206036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.206047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-04-15 22:58:58.206283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.206488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.206500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-04-15 22:58:58.206787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.207092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.207103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-04-15 22:58:58.207476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.207819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-04-15 22:58:58.207832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-04-15 22:58:58.208183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.208521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.208531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.208906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.209176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.209187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 
00:32:13.631 [2024-04-15 22:58:58.209494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.209714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.209725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.210046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.210387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.210398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.210666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.210958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.210969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.211297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.211638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.211649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.211998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.212211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.212222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.212579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.212958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.212968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.213318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.213650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.213661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 
00:32:13.631 [2024-04-15 22:58:58.214010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.214391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.214401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.214755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.215138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.215149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.215529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.215890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.215901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.216253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.216574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.216585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.216927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.217310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.217320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.217772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.218109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.218119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.218495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.218881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.218891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 
00:32:13.631 [2024-04-15 22:58:58.219242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.219465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.219474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.219821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.220074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.220084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.220435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.220749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.220760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.221135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.221517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.221527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.221864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.222252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.222262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.222666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.223006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.223016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.223367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.223751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.223761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 
00:32:13.631 [2024-04-15 22:58:58.224133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.224496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.224506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.224726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.225027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.225037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.225393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.225741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.225752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.226105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.226487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.226497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.226841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.227240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.227250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.227554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.227923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.227933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-04-15 22:58:58.228302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-04-15 22:58:58.228690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.228700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 
00:32:13.632 [2024-04-15 22:58:58.229003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.229376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.229386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.229733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.230076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.230086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.230438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.230759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.230770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.231057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.231426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.231436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.231808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.232193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.232204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.232551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.232895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.232905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.233256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.233638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.233649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 
00:32:13.632 [2024-04-15 22:58:58.234024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.234405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.234416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.234787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.235113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.235123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.235455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.235777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.235788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.236082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.236449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.236459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.236704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.237040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.237050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.237405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.237726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.237736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.238119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.238500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.238511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 
00:32:13.632 [2024-04-15 22:58:58.238844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.239207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.239217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.239591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.239942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.239953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.240304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.240689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.240699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.241071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.241453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.241464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.241816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.242160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.242170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.242402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.242761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.242771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.243125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.243492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.243504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 
00:32:13.632 [2024-04-15 22:58:58.243877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.244215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.244226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.244545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.244886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.244896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.245242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.245577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.632 [2024-04-15 22:58:58.245588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.632 qpair failed and we were unable to recover it. 00:32:13.632 [2024-04-15 22:58:58.245995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.246335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.246345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.246762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.247101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.247112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.247468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.247849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.247860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.248230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.248614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.248624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 
00:32:13.633 [2024-04-15 22:58:58.248976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.249356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.249367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.249715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.250080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.250090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.250447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.250787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.250798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.251172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.251554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.251564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.251915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.252260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.252270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.252650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.253007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.253017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.253372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.253754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.253765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 
00:32:13.633 [2024-04-15 22:58:58.254101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.254478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.254488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.254845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.255186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.255196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.255568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.255912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.255922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.256280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.256665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.256675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.257082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.257417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.257428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.257806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.258194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.258204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.258555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.258939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.258949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 
00:32:13.633 [2024-04-15 22:58:58.259333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.259686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.259696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.260053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.260439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.260449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.260827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.261175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.261186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.261535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.261878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.261889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.262288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.262630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.262641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.263014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.263351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.263361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.263737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.264123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.264134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 
00:32:13.633 [2024-04-15 22:58:58.264486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.264831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.264841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.265192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.265571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.265582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.265929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.266318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.266328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.266686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.267015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.633 [2024-04-15 22:58:58.267025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.633 qpair failed and we were unable to recover it. 00:32:13.633 [2024-04-15 22:58:58.267402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.267738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.267748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.268101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.268483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.268493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.268866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.269250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.269260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 
00:32:13.634 [2024-04-15 22:58:58.269607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.269937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.269947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.270326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.270655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.270666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.271011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.271394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.271404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.271726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.272092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.272102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.272444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.272785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.272795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.273173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.273557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.273568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.273919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.274257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.274267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 
00:32:13.634 [2024-04-15 22:58:58.274640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.275017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.275028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.275380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.275604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.275615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.275989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.276256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.276267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.276594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.276956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.276966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.277337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.277689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.277699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.278060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.278434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.278444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.278807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.279192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.279202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 
00:32:13.634 [2024-04-15 22:58:58.279506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.279887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.279898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.280271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.280652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.280665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.281016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.281400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.281410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.281852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.282109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.282120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.282491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.282848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.282858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.283226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.283612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.283624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.283973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.284357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.284367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 
00:32:13.634 [2024-04-15 22:58:58.284740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.284969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.284978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.285331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.285737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.285747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.286093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.286474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.286484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.286835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.287213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.287223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.287568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.287741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.287751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.634 qpair failed and we were unable to recover it. 00:32:13.634 [2024-04-15 22:58:58.288103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.634 [2024-04-15 22:58:58.288496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.288506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.288922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.289152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.289161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 
00:32:13.635 [2024-04-15 22:58:58.289508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.289779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.289790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.290104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.290468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.290478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.290826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.291143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.291154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.291501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.291891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.291901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.292143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.292481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.292492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.292866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.293257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.293267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.293613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.293930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.293940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 
00:32:13.635 [2024-04-15 22:58:58.294296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.294691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.294701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.294924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.295262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.295272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.295646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.296041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.296051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.296403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.296758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.296769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.297052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.297433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.297443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.297811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.298179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.298189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.298562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.298903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.298913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 
00:32:13.635 [2024-04-15 22:58:58.299264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.299627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.299637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.300009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.300390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.300404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.300751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.301125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.301136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.301511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.301871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.301882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.302266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.302637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.302648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.302960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.303342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.303352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.303704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.304102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.304114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 
00:32:13.635 [2024-04-15 22:58:58.304583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.304959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.304969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.305322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.305684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.305695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.306052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.306434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.306444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.306825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.307212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.307222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.307599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.307902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.307912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.308248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.308635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.308646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.635 qpair failed and we were unable to recover it. 00:32:13.635 [2024-04-15 22:58:58.309017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.309402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.635 [2024-04-15 22:58:58.309413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 
00:32:13.636 [2024-04-15 22:58:58.309766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.310113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.310126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.310500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.310837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.310848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.311197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.311579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.311589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.311959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.312341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.312351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.312705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.313060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.313071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.313382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.313757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.313768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.314118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.314456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.314466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 
00:32:13.636 [2024-04-15 22:58:58.314739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.315076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.315086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.315442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.315825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.315835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.316209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.316549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.316560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.316875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.317258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.317269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.317489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.317844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.317856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.318259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.318650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.318661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.319007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.319299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.319310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 
00:32:13.636 [2024-04-15 22:58:58.319659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.319978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.319989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.320307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.320694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.320705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.320966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.321347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.321358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.321793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.322087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.322097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.322448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.322796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.322806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.323133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.323361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.323371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.323723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.324111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.324122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 
00:32:13.636 [2024-04-15 22:58:58.324498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.324888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.324898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.325315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.325702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.325712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.326036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.326402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.326412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.636 [2024-04-15 22:58:58.326767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.327106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.636 [2024-04-15 22:58:58.327116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.636 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.327489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.327872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.327882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.328230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.328604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.328615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.328985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.329367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.329377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 
00:32:13.637 [2024-04-15 22:58:58.329734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.330040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.330050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.330371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.330730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.330741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.331081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.331382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.331393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.331781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.332069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.332080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.332445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.332767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.332778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.333198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.333357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.333367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.333621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.333838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.333848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 
00:32:13.637 [2024-04-15 22:58:58.334188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.334575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.334586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.334837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.334985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.334995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.335335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.335689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.335700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.336058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.336400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.336410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.336802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.337037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.337046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.337417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.337682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.337693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.338068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.338433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.338444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 
00:32:13.637 [2024-04-15 22:58:58.338815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.339200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.339211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.339590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.339975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.339985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.340340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.340613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.340623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.340978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.341362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.341372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.341639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.342018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.342028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.342405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.342769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.342780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.343137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.343533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.343553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 
00:32:13.637 [2024-04-15 22:58:58.343892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.344280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.344290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.344646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.344910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.344920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.345300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.345643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.345655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.345894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.346188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.346199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-04-15 22:58:58.346577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-04-15 22:58:58.346906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.346916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.347311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.347575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.347586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.347962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.348351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.348361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 
00:32:13.638 [2024-04-15 22:58:58.348769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.349001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.349010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.349347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.349698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.349708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.350084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.350373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.350384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.350746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.351132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.351142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.351500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.351828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.351839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1337585 Killed "${NVMF_APP[@]}" "$@" 00:32:13.638 [2024-04-15 22:58:58.352079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.352286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.352296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.352657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 22:58:58 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:32:13.638 [2024-04-15 22:58:58.353002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.353012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 
00:32:13.638 22:58:58 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:13.638 [2024-04-15 22:58:58.353337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 22:58:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:13.638 22:58:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:13.638 [2024-04-15 22:58:58.353724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.353734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 22:58:58 -- common/autotest_common.sh@10 -- # set +x 00:32:13.638 [2024-04-15 22:58:58.354010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.354318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.354328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.354684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.355050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.355061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.355317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.355656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.355667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.356054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.356445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.356456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.356757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.357144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.357155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.357537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.357876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.357887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 
00:32:13.638 [2024-04-15 22:58:58.358261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.358643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.358654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.359012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.359403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.359414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.359689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.359923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.359934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-04-15 22:58:58.360320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.360661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.360672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 22:58:58 -- nvmf/common.sh@469 -- # nvmfpid=1338626 00:32:13.638 22:58:58 -- nvmf/common.sh@470 -- # waitforlisten 1338626 00:32:13.638 [2024-04-15 22:58:58.361037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 22:58:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:13.638 [2024-04-15 22:58:58.361380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 22:58:58 -- common/autotest_common.sh@819 -- # '[' -z 1338626 ']' 00:32:13.638 [2024-04-15 22:58:58.361391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 22:58:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.638 [2024-04-15 22:58:58.361764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 22:58:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:13.638 22:58:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.638 [2024-04-15 22:58:58.362144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-04-15 22:58:58.362155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 
00:32:13.638 22:58:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:13.638 22:58:58 -- common/autotest_common.sh@10 -- # set +x [2024-04-15 22:58:58.362547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.638 [2024-04-15 22:58:58.362804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.638 [2024-04-15 22:58:58.362814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420
00:32:13.638 qpair failed and we were unable to recover it.
00:32:13.641 [2024-04-15 22:58:58.406475] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:32:13.641 [2024-04-15 22:58:58.406522] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:13.911 EAL: No free 2048 kB hugepages reported on node 1
00:32:13.912 [2024-04-15 22:58:58.472068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.472453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.472464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.472819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.473148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.473159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.473499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.473876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.473887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.474242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.474631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.474641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.475053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.475390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.475401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.475584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.475906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.475916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.476288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.476555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.476567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 
00:32:13.912 [2024-04-15 22:58:58.476972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.477319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.477329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.477711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.478077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.478087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.478457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.478843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.478854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.479229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.479606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.479617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.479970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.480058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.480069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.480415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.480736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.480747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.481102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.481485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.481495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 
00:32:13.912 [2024-04-15 22:58:58.481848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.482233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.482243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.482597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.482999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.483010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.483357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.483691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.483702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.912 qpair failed and we were unable to recover it. 00:32:13.912 [2024-04-15 22:58:58.483979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.484351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.912 [2024-04-15 22:58:58.484361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.484577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.484893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.484903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.485261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.485645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.485655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.486007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.486396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.486407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 
00:32:13.913 [2024-04-15 22:58:58.486755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.487110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.487122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.487489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.487865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.487876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.488062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.488436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.488447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.488802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.489145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.489156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.489378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.489739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.489750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.490070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.490452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.490463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.490835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.491173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.491183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 
00:32:13.913 [2024-04-15 22:58:58.491500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.491864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.491875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.492093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.492323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.492333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.492585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.492905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.492916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.493275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.493614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.493625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.494003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.494386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.494396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.494748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.495094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.495105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.495400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:13.913 [2024-04-15 22:58:58.495479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.495826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.495838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 
00:32:13.913 [2024-04-15 22:58:58.496187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.496418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.496429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.496797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.497179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.497189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.497561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.497797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.497808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.498175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.498558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.498569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.498931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.499317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.499327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.499693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.500080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.500091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.500438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.500708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.500719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 
00:32:13.913 [2024-04-15 22:58:58.501100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.501487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.501497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.501848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.502190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.502201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.502550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.502879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.502890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.503229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.503568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.503580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.503931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.504316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.504326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.504676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.505065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.505077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.505412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.505628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.505640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 
00:32:13.913 [2024-04-15 22:58:58.505884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.506273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.506284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.506622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.506967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.506978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.507333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.507739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.507750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.507974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.508318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.508329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.508748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.509090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.509100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.509481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.509829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.509839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.510191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.510554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.510564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 
00:32:13.913 [2024-04-15 22:58:58.510919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.511282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.511292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.511681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.512066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.512077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.512449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.512800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.512811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.513033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.513401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.513411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.513771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.514142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.514152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.514497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.514878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.514888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.515264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.515653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.515664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 
00:32:13.913 [2024-04-15 22:58:58.516026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.516416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.516427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.913 qpair failed and we were unable to recover it. 00:32:13.913 [2024-04-15 22:58:58.516776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.913 [2024-04-15 22:58:58.517164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.517175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.517519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.517904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.517915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.518288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.518663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.518674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.519041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.519383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.519394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.519756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.520098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.520109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.520512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.520817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.520828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 
00:32:13.914 [2024-04-15 22:58:58.521167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.521554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.521564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.521898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.522100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.522111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.522494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.522711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.522724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.523082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.523438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.523449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.523779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.524164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.524175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.524526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.524793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.524805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.525186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.525401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.525413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 
00:32:13.914 [2024-04-15 22:58:58.525764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.526094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.526105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.526394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.526758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.526769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.527123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.527510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.527521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.527881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.528268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.528279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.528631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.528994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.529006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.529379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.529766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.529780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.530137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.530520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.530531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 
00:32:13.914 [2024-04-15 22:58:58.530886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.531273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.531284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.531638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.531934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.531945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.532319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.532667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.532678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.533036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.533413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.533424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.533787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.534166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.534176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.534533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.534841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.534852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.535236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.535506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.535517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 
00:32:13.914 [2024-04-15 22:58:58.535871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.536101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.536113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.536487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.536845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.536856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.537217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.537559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.537570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.537884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.538138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.538149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.538388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.538590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.538600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.538950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.539340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.539350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.539703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.540042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.540052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 
00:32:13.914 [2024-04-15 22:58:58.540430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.540640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.540651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.541047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.541440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.541451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.541807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.542163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.542173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.542523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.542889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.542900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.543277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.543658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.543669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.544031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.544264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.544273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.544616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.545006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.545017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 
00:32:13.914 [2024-04-15 22:58:58.545369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.545755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.545766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.546069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.546444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.546454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.546760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.547154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.547164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.547521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.547844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.547855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.548072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.548440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.548450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.548801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.549140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.549151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.549484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.549866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.549876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 
00:32:13.914 [2024-04-15 22:58:58.550248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.550590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.550600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.550954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.551177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.551188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.551559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.551891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.551901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.552334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.552671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.552681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.914 qpair failed and we were unable to recover it. 00:32:13.914 [2024-04-15 22:58:58.552915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.914 [2024-04-15 22:58:58.553301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.553312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.553671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.554041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.554051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.554357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.554733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.554743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 
00:32:13.915 [2024-04-15 22:58:58.555108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.555494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.555504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.555868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.556256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.556267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.556615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.556992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.557002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.557220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.557559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.557569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.557777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.558128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.558143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.558521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.558851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.558846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:13.915 [2024-04-15 22:58:58.558862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.558971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.915 [2024-04-15 22:58:58.558981] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.915 [2024-04-15 22:58:58.558989] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
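The app_setup_trace NOTICE above documents how the trace captured during this run can be inspected. A minimal sketch, assuming the shared-memory trace file for instance 0 is still present on the test node and that the spdk_trace tool built in this workspace is on PATH:

  spdk_trace -s nvmf -i 0        # capture a snapshot of the nvmf app's tracepoint events at runtime, as the NOTICE suggests
  cp /dev/shm/nvmf_trace.0 .     # or keep the raw trace file for offline analysis/debug

For the surrounding errors, errno = 111 is ECONNREFUSED: each nvme_tcp_qpair_connect_sock attempt to 10.0.0.2 on the standard NVMe/TCP port 4420 is being refused, which is consistent with the target application still starting up (its reactors are only reported as started in the entries that follow).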
00:32:13.915 [2024-04-15 22:58:58.559216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.559136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:13.915 [2024-04-15 22:58:58.559305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:13.915 [2024-04-15 22:58:58.559426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:13.915 [2024-04-15 22:58:58.559517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.559528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.559427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:13.915 [2024-04-15 22:58:58.559910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.560141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.560152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.560465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.560703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.560715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.561120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.561510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.561520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.561866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.562207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.562218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.562474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.562861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.562871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 
00:32:13.915 [2024-04-15 22:58:58.563226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.563614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.563627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.563876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.564164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.564174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.564523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.564906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.564917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.565195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.565440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.565450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.565807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.566203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.566214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.566598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.566948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.566958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.567316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.567479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.567489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 
00:32:13.915 [2024-04-15 22:58:58.567837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.568231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.568241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.568598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.568842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.568851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.569239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.569463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.569473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.569838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.570184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.570195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.570583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.570976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.570987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.571206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.571548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.571558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.571794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.572180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.572190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 
00:32:13.915 [2024-04-15 22:58:58.572549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.572662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.572672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.573003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.573394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.573405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.573839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.574183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.574193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.574573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.574922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.574933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.575154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.575383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.575394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.575554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.575941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.575952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.576311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.576435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.576444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 
00:32:13.915 [2024-04-15 22:58:58.576811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.577210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.577221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.577576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.577813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.577823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.578047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.578293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.578304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.578660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.578894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.578905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.579114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.579448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.579458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.579806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.580198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.580210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.580551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.580915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.580926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 
00:32:13.915 [2024-04-15 22:58:58.581280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.581626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.581638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.581993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.582250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.582261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.582643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.582860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.582871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.583236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.583631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.583642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.584017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.584245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.584255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.584563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.584775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.584785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.585002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.585389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.585400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 
00:32:13.915 [2024-04-15 22:58:58.585790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.586122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.586133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.915 [2024-04-15 22:58:58.586374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.586721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.915 [2024-04-15 22:58:58.586732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.915 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.587107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.587453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.587464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.587821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.588033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.588043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.588349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.588549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.588560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.588922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.589148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.589158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.589517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.589748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.589758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 
00:32:13.916 [2024-04-15 22:58:58.590019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.590358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.590368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.590431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.590796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.590807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.591178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.591526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.591537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.591921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.592262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.592274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.592627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.593008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.593019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.593378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.593731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.593742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.593968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.594139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.594149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 
00:32:13.916 [2024-04-15 22:58:58.594484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.594869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.594880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.595234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.595581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.595592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.595934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.596321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.596335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.596646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.596868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.596878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.597239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.597599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.597610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.597965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.598333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.598344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.598686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.598923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.598934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 
00:32:13.916 [2024-04-15 22:58:58.599290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.599642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.599653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.600033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.600196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.600206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.600397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.600668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.600679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.601014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.601358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.601370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.601699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.602066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.602076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.602464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.602674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.602684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.602946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.603287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.603297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 
00:32:13.916 [2024-04-15 22:58:58.603674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.603987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.603998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.604219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.604611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.604622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.604977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.605372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.605383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.605742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.606102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.606113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.606313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.606685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.606695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.606918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.607275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.607285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.607663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.608023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.608033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 
00:32:13.916 [2024-04-15 22:58:58.608229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.608608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.608618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.608808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.609176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.609186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.609547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.609947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.609957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.610177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.610418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.610428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.610488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.610687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.610698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.610902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.611263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.611274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.611643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.611866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.611876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 
00:32:13.916 [2024-04-15 22:58:58.612252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.612620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.612631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.612989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.613230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.613241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.613635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.613868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.613880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.614233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.614575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.614586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.614978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.615321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.615331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.615736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.616116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.616127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 00:32:13.916 [2024-04-15 22:58:58.616506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.616865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.616875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.916 qpair failed and we were unable to recover it. 
00:32:13.916 [2024-04-15 22:58:58.617141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.916 [2024-04-15 22:58:58.617338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.617348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.617690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.618078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.618089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.618308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.618656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.618667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.619026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.619325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.619335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.619619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.619962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.619972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.620349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.620582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.620592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.620941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.621332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.621343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 
00:32:13.917 [2024-04-15 22:58:58.621704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.622063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.622073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.622420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.622769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.622781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.623160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.623505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.623515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.623865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.624210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.624220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.624602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.624941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.624951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.625230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.625463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.625473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.625860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.626247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.626257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 
00:32:13.917 [2024-04-15 22:58:58.626474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.626858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.626869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.627092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.627326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.627336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.627727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.628112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.628123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.628344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.628562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.628574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.628907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.629207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.629218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.629602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.629833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.629843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.630204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.630263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.630272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 
00:32:13.917 [2024-04-15 22:58:58.630619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.630829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.630840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.631147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.631460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.631470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.631876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.632172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.632183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.632537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.632783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.632794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.633185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.633414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.633433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.633812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.634199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.634210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.634585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.634773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.634782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 
00:32:13.917 [2024-04-15 22:58:58.635105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.635321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.635330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.635697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.636066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.636077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.636516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.636823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.636835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.637216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.637557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.637568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.637927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.638323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.638333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.638658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.638723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.638734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.639093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.639323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.639334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 
00:32:13.917 [2024-04-15 22:58:58.639715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.640013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.640023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.640200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.640578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.640589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.640960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.641309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.641319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.641680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.642022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.642033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.642413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.642481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.642489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.642830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.642888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.642897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.643239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.643470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.643480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 
00:32:13.917 [2024-04-15 22:58:58.643843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.644193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.644203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.644567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.644801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.644810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.645208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.645597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.645608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.645999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.646390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.646400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.646708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.647097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.647107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.647289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.647606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.647617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.648047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.648388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.648398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 
00:32:13.917 [2024-04-15 22:58:58.648770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.649118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.649128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.649485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.649872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.649883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.650178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.650538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.650553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.650913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.651181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.651192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.651508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.651861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.651872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.652229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.652444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.652454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.917 qpair failed and we were unable to recover it. 00:32:13.917 [2024-04-15 22:58:58.652800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.653147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.917 [2024-04-15 22:58:58.653158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 
00:32:13.918 [2024-04-15 22:58:58.653556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.653909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.653920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.654305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.654467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.654477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.654681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.654876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.654888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.655253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.655593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.655608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.655829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.656216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.656226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.656607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.656999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.657009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.657248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.657559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.657570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 
00:32:13.918 [2024-04-15 22:58:58.657950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.658294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.658304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.658661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.659030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.659040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.659423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.659651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.659662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.660060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.660448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.660458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.660611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.660800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.660810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.661176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.661564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.661575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.661942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.662332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.662343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 
00:32:13.918 [2024-04-15 22:58:58.662699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.662759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.662768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.663119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.663509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.663519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.663866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.664209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.664219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.664442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.664814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.664825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.665183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.665353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.665364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.665722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.666081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.666092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.666446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.666803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.666814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 
00:32:13.918 [2024-04-15 22:58:58.667035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.667346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.667356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.667656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.667885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.667895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.668271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.668659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.668671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.668982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.669348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.669359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.669740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.669954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.669964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.670294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.670639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.670649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.671023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.671226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.671235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 
00:32:13.918 [2024-04-15 22:58:58.671523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.671899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.671910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.672289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.672629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.672640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.672895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.673281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.673290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.673674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.674061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.674071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.674429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.674792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.674803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.675180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.675566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.675577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.675782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.676112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.676123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 
00:32:13.918 [2024-04-15 22:58:58.676467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.676823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.676833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.677192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.677582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.677592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.677981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.678324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.678334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.678697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.678919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.678928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.679160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.679213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.679221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.679513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.679922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.679940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.680275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.680620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.680630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 
00:32:13.918 [2024-04-15 22:58:58.680989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.681377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.681388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.681764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.682112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.682123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.682485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.682867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.682878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.683261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.683603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.683614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.683987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.684191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.684201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.684423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.684778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.684789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 00:32:13.918 [2024-04-15 22:58:58.685092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.685395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.918 [2024-04-15 22:58:58.685405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.918 qpair failed and we were unable to recover it. 
00:32:13.918 [2024-04-15 22:58:58.685730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.686075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.686085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.686303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.686531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.686541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.686913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.687259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.687269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.687631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.687955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.687965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.688254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.688617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.688627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.688868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.689123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.689136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.689515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.689820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.689831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 
00:32:13.919 [2024-04-15 22:58:58.690187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.690575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.690586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.690781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.691097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.691107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.691266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.691622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.691633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.692007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.692239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.692251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.692458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.692786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.692797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.693033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.693392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.693403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.693640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.693858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.693869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 
00:32:13.919 [2024-04-15 22:58:58.694036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.694414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.694424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.694806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.695152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.695162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.695393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.695683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.695693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.696055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.696287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.696297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.696678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.696849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.696860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.697184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.697556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.697567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.697924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.698301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.698312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 
00:32:13.919 [2024-04-15 22:58:58.698667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.699051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.699062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.699431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.699803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.699814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.700025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.700313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.700323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.700659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.701037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.701047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.701414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.701759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.701770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.702156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.702512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.702522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.702876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.703271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.703281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 
00:32:13.919 [2024-04-15 22:58:58.703534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.703902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.703913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.704135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.704534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.704550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.704892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.705240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.705250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.705611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.705908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.705918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.706162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.706351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.706361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.706736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.707125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.707135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.707517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.707755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.707765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 
00:32:13.919 [2024-04-15 22:58:58.708121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.708460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.708471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.708829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.709048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.709059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.709408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.709640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.709650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:13.919 [2024-04-15 22:58:58.710035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.710422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.919 [2024-04-15 22:58:58.710432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:13.919 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.710653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.711016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.711028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.711409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.711760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.711771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.712128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.712404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.712416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 
00:32:14.190 [2024-04-15 22:58:58.712780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.713011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.713021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.713378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.713758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.713769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.713972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.714026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.714036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.714348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.714738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.714749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.715123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.715468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.715480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.715847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.716082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.716093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.716219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.716424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.716435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 
00:32:14.190 [2024-04-15 22:58:58.716784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.717170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.717180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.717401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.717788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.717800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.718155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.718384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.718393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.718769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.719039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.719050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.719420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.719799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.719809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.720031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.720384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.720394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.720750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.721098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.721108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 
00:32:14.190 [2024-04-15 22:58:58.721315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.721524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.721534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.721685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.721941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.721951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.722320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.722714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.722725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.723161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.723504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.723515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.723723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.724056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.724067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.724426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.724795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.724806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 00:32:14.190 [2024-04-15 22:58:58.725182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.725396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.725406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.190 qpair failed and we were unable to recover it. 
00:32:14.190 [2024-04-15 22:58:58.725771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.726115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.190 [2024-04-15 22:58:58.726126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.726321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.726695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.726705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.727061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.727402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.727413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.727634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.727851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.727861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.728250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.728636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.728646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.728998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.729342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.729352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.729700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.729997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.730008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 
00:32:14.191 [2024-04-15 22:58:58.730186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.730488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.730498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.730699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.731063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.731074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.731338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.731720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.731731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.731925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.732288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.732298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.732676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.733037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.733047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.733410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.733800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.733810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.734190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.734534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.734549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 
00:32:14.191 [2024-04-15 22:58:58.734875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.735081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.735091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.735457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.735809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.735820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.736174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.736527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.736537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.736750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.737129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.737139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.737448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.737830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.737840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.738254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.738559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.738571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.738792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.739146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.739157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 
00:32:14.191 [2024-04-15 22:58:58.739226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.739559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.739571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.739960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.740349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.740359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.740707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.741099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.741109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.741472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.741787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.741798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.742157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.742555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.742566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.742887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.743276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.743286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.743662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.744069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.744079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 
00:32:14.191 [2024-04-15 22:58:58.744434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.744806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.744816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.191 qpair failed and we were unable to recover it. 00:32:14.191 [2024-04-15 22:58:58.745207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.745421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.191 [2024-04-15 22:58:58.745432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.745783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.746109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.746119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.746474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.746837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.746848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.747210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.747446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.747456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.747825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.748174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.748185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.748547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.748908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.748921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 
00:32:14.192 [2024-04-15 22:58:58.749297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.749686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.749697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.749898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.750071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.750082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.750361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.750706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.750716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.750772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.751011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.751022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.751400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.751793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.751803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.752160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.752552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.752562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.752737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.753119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.753129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 
00:32:14.192 [2024-04-15 22:58:58.753489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.753722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.753732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.754113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.754461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.754471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.754851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.755217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.755228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.755493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.755877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.755887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.756105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.756449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.756460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.756684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.757031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.757042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.757233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.757450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.757461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 
00:32:14.192 [2024-04-15 22:58:58.757829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.758224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.758235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.758604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.758820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.758830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.759168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.759522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.759532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.759606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.759929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.759939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.760125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.760455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.760465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.760825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.761213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.761223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.761569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.761804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.761814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 
00:32:14.192 [2024-04-15 22:58:58.762036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.762239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.762249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.762617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.762963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.762973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.192 qpair failed and we were unable to recover it. 00:32:14.192 [2024-04-15 22:58:58.763334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.192 [2024-04-15 22:58:58.763727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.763738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.763933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.764141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.764151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.764512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.764864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.764875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.765254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.765642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.765652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.766019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.766274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.766285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 
00:32:14.193 [2024-04-15 22:58:58.766664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.766894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.766904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.767276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.767664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.767674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.768056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.768446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.768456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.768674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.769072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.769082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.769450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.769656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.769666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.769974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.770343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.770353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.770733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.770968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.770978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 
00:32:14.193 [2024-04-15 22:58:58.771335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.771702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.771713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.772064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.772451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.772462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.772819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.772988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.772998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.773200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.773589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.773599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.773821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.774145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.774155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.774378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.774688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.774698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.775081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.775426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.775436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 
00:32:14.193 [2024-04-15 22:58:58.775802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.776032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.776042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.776396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.776784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.776796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.777194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.777587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.777597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.777947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.778153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.778163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.778535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.778835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.778845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.779198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.779539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.779554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.779916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.780133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.780144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 
00:32:14.193 [2024-04-15 22:58:58.780329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.780720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.780731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.781111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.781454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.781468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.781687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.781985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.193 [2024-04-15 22:58:58.781995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.193 qpair failed and we were unable to recover it. 00:32:14.193 [2024-04-15 22:58:58.782329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.782551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.782562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.782898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.783202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.783213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.783527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.783896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.783906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.784252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.784597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.784607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 
00:32:14.194 [2024-04-15 22:58:58.784988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.785202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.785212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.785378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.785635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.785645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.785988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.786339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.786350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.786743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.786948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.786957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.787313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.787656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.787666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.787998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.788231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.788240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 00:32:14.194 [2024-04-15 22:58:58.788620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.789011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.194 [2024-04-15 22:58:58.789021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.194 qpair failed and we were unable to recover it. 
00:32:14.199 [2024-04-15 22:58:58.880384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.880733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.880745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.881123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.881355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.881365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.881728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.882074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.882084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.882471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.882703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.882713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.883090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.883487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.883497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.883873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.884264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.884275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.884493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.884875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.884886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 
00:32:14.199 [2024-04-15 22:58:58.885265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.885614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.885625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.885826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.885889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.885899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.886262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.886655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.886666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.886862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.887196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.887206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.887605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.887814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.887824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.888102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.888413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.888425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 00:32:14.199 [2024-04-15 22:58:58.888780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.889175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.889185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.199 qpair failed and we were unable to recover it. 
00:32:14.199 [2024-04-15 22:58:58.889538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.889814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.199 [2024-04-15 22:58:58.889824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.890210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.890598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.890609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.890870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.891217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.891228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.891604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.892041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.892051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.892404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.892750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.892762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.893149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.893494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.893504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.893857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.894203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.894214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 
00:32:14.200 [2024-04-15 22:58:58.894571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.894791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.894801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.895173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.895562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.895573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.895936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.896256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.896266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.896474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.896821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.896832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.897212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.897559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.897569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.897737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.898079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.898090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.898482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.898712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.898723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 
00:32:14.200 [2024-04-15 22:58:58.899077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.899462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.899472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.899691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.899896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.899906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.900229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.900460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.900471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.900839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.901227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.901240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.901599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.901762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.901772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.901967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.902166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.902176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.902541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.902918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.902928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 
00:32:14.200 [2024-04-15 22:58:58.903306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.903692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.903703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.904065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.904453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.904464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.904668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.904867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.904878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.905256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.905652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.905662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.905890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.906247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.906257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.906575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.906929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.906939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.907313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.907708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.907719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 
00:32:14.200 [2024-04-15 22:58:58.908094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.908300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.908311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.200 [2024-04-15 22:58:58.908482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.908745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.200 [2024-04-15 22:58:58.908756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.200 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.909119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.909460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.909471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.909851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.910241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.910252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.910602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.910905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.910915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.911119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.911508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.911518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.911866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.912253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.912263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 
00:32:14.201 [2024-04-15 22:58:58.912648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.912937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.912947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.913317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.913681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.913691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.914078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.914399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.914409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.914763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.915110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.915120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.915314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.915660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.915671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.916030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.916247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.916256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.916586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.916986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.916996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 
00:32:14.201 [2024-04-15 22:58:58.917353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.917709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.917720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.918035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.918228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.918238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.918447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.918816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.918829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.919210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.919601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.919612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.919776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.919974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.919984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.920293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.920516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.920526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.920730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.921080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.921090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 
00:32:14.201 [2024-04-15 22:58:58.921473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.921827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.921838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.922193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.922533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.922547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.922789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.923023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.923032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.923287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.923584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.923595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.923934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.924155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.924165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.924366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.924705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.924716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.925100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.925490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.925501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 
00:32:14.201 [2024-04-15 22:58:58.925708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.926075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.926085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.926468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.926823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.926834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.927239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.927588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.927600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.201 qpair failed and we were unable to recover it. 00:32:14.201 [2024-04-15 22:58:58.927906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.201 [2024-04-15 22:58:58.928293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.928304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.928509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.928881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.928891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.929122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.929513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.929523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.929877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.930152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.930163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 
00:32:14.202 [2024-04-15 22:58:58.930384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.930698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.930709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.930763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.931104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.931114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.931496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.931579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.931588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.931955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.932163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.932173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.932550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.932892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.932903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.933253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.933643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.933656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.934039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.934213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.934224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 
00:32:14.202 [2024-04-15 22:58:58.934546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.934888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.934899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.935280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.935641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.935652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.936007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.936346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.936357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.936709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.937075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.937085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.937444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.937631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.937641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.937970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.938361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.938372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.938638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.938987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.938998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 
00:32:14.202 [2024-04-15 22:58:58.939223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.939613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.939623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.939964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.940336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.940346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.940703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.941068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.941078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.941296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.941679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.941690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.941904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.942115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.942125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.942480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.942832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.942843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.943067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.943365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.943376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 
00:32:14.202 [2024-04-15 22:58:58.943567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.943961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.943972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.944194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.944552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.944563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.944925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.945315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.945325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.945583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.945791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.945801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.202 qpair failed and we were unable to recover it. 00:32:14.202 [2024-04-15 22:58:58.946167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.946379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.202 [2024-04-15 22:58:58.946388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.946742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.947141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.947151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.947500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.947885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.947896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 
00:32:14.203 [2024-04-15 22:58:58.948257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.948529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.948540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.948903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.949264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.949274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.949482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.949854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.949865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.950216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.950432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.950443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.950645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.950704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.950714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.951063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.951450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.951461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.951806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.952082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.952093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 
00:32:14.203 [2024-04-15 22:58:58.952352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.952520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.952531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.952892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.953295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.953307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.953662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.953849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.953859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.954233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.954447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.954458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.954849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.955248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.955258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.955678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.955896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.955906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.956267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.956620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.956631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 
00:32:14.203 [2024-04-15 22:58:58.956836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.957026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.957038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.957394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.957748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.957759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.957937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.958175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.958186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.958554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.958945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.958956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.959336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.959574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.959586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.959843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.960235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.960245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.960624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.960843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.960853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 
00:32:14.203 [2024-04-15 22:58:58.961209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.961549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.961560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.203 qpair failed and we were unable to recover it. 00:32:14.203 [2024-04-15 22:58:58.961760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.203 [2024-04-15 22:58:58.962113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.962123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.962552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.962788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.962800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.963176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.963522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.963533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.963738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.963958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.963968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.964230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.964578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.964588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.964946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.965272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.965282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 
00:32:14.204 [2024-04-15 22:58:58.965468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.965866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.965880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.966237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.966579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.966589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.966943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.967175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.967185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.967540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.967940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.967950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.968308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.968632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.968643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.969010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.969370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.969380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.969647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.970038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.970049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 
00:32:14.204 [2024-04-15 22:58:58.970406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.970759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.970770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.971159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.971323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.971334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.971548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.971940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.971951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.972214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.972446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.972457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.972643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.972933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.972943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.973266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.973657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.973668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.974034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.974396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.974407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 
00:32:14.204 [2024-04-15 22:58:58.974719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.975107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.975117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.975474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.975670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.975680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.976062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.976444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.976455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.976524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.976693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.976703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.977035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.977338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.977348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.977713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.977904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.977915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.978277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.978624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.978635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 
00:32:14.204 [2024-04-15 22:58:58.979017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.979362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.979372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.204 [2024-04-15 22:58:58.979723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.979951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.204 [2024-04-15 22:58:58.979961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.204 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.980313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.980697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.980709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.980935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.981228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.981239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.981421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.981677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.981688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.982055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.982440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.982452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.982828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.983220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.983231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 
00:32:14.205 [2024-04-15 22:58:58.983606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.983995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.984006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.984065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.984263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.984275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.984661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.984865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.984876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.985101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.985454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.985465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.985825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.986215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.986225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.986578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.986812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.986823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.987205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.987551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.987562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 
00:32:14.205 [2024-04-15 22:58:58.987919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.988266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.988277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.988479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.988847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.205 [2024-04-15 22:58:58.988858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.205 qpair failed and we were unable to recover it. 00:32:14.205 [2024-04-15 22:58:58.989206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.989421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.989434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.989688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.990076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.990087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.990451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.990761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.990772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.991000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.991231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.991241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.991460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.991581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.991600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 
00:32:14.477 [2024-04-15 22:58:58.991947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.992287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.992297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.992656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.993050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.993060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.993445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.993858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.993869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.994125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.994512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.994523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.994582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.994916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.994927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.995245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.995631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.995643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.996017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.996356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.996366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 
00:32:14.477 [2024-04-15 22:58:58.996589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.996922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.996932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.997130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.997193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.997204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.997524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.997755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.997765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.998145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.998492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.998503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.998910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.999254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.999265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:58.999641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.999959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:58.999970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:59.000342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.000734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.000745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 
00:32:14.477 [2024-04-15 22:58:59.000975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.001202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.001212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:59.001559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.001917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.001928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:59.002310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.002658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.002669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:59.003112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.003504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.003515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:59.003888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.004229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.004240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:59.004486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.004576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.004586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:59.004852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.005238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.005248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 
00:32:14.477 [2024-04-15 22:58:59.005604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.005946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.005957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.477 qpair failed and we were unable to recover it. 00:32:14.477 [2024-04-15 22:58:59.006342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.477 [2024-04-15 22:58:59.006506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.006516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.006698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.006990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.007002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.007372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.007757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.007768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.008122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.008516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.008526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.008936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.009300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.009311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.009538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.009897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.009909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 
00:32:14.478 [2024-04-15 22:58:59.010239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.010618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.010629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.011004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.011366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.011376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.011597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.011939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.011950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.012309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.012619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.012630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.012985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.013378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.013389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.013592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.013811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.013821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.014014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.014388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.014398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 
00:32:14.478 [2024-04-15 22:58:59.014756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.015077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.015089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.015345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.015705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.015716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.015960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.016234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.016245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.016442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.016774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.016785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.017143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.017484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.017495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.017854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.018199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.018211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.018282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.018626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.018637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 
00:32:14.478 [2024-04-15 22:58:59.018825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.019080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.019091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.019446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.019802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.019813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.020076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.020463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.020473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.020817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.021050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.021061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.021205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.021419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.021430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.021653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.021887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.021898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.022235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.022550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.022561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 
00:32:14.478 [2024-04-15 22:58:59.022749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.022971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.022982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.478 [2024-04-15 22:58:59.023366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.023727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.478 [2024-04-15 22:58:59.023741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.478 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.024102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.024494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.024504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.024729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.025124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.025134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.025356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.025750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.025761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.026113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.026344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.026355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.026561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.026874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.026885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 
00:32:14.479 [2024-04-15 22:58:59.027269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.027621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.027632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.027984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.028334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.028344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.028578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.028955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.028965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.029283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.029611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.029621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.029984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.030315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.030325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.030707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.031025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.031036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.031378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.031607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.031618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 
00:32:14.479 [2024-04-15 22:58:59.031964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.032321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.032332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.032694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.032870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.032881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.033239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.033513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.033524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.033891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.034244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.034254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.034512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.034885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.034895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.035225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.035559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.035571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.035909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.036145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.036156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 
00:32:14.479 [2024-04-15 22:58:59.036377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.036751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.036761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.037040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.037389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.037399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.037625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.037955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.037965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.038309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.038547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.038557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.038904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.039137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.039148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.039504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.039835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.039846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.039901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.040092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.040102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 
00:32:14.479 [2024-04-15 22:58:59.040465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.040823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.040834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.041178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.041559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.041570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.041884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.042135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.479 [2024-04-15 22:58:59.042145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.479 qpair failed and we were unable to recover it. 00:32:14.479 [2024-04-15 22:58:59.042498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.042709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.042719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.043109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.043328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.043339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.043704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.044099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.044110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.044465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.044833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.044844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 
00:32:14.480 [2024-04-15 22:58:59.045016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.045246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.045257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.045655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.045900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.045911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.046288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.046696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.046707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.046894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.046962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.046973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.047346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.047555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.047566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.047940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.048201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.048212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.048482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.048721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.048733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 
00:32:14.480 [2024-04-15 22:58:59.048900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.049266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.049277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.049494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.049845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.049855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.050077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.050310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.050321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.050705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.051080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.051091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.051449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.051636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.051647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.051838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.052019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.052029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.052392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.052767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.052777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 
00:32:14.480 [2024-04-15 22:58:59.053135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.053438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.053449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.053665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.054067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.054077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.054467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.054531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.054539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.054755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.055127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.055141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.055503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.055882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.055894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.056093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.056384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.056396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.056606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.056934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.056944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 
00:32:14.480 [2024-04-15 22:58:59.057170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.057467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.057478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.057706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.058091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.058101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.058460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.058655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.480 [2024-04-15 22:58:59.058666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.480 qpair failed and we were unable to recover it. 00:32:14.480 [2024-04-15 22:58:59.058868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.059228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.059240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.059310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.059517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.059528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.059760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.060115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.060126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.060493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.060688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.060699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 
00:32:14.481 [2024-04-15 22:58:59.060887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.061226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.061237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.061632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.062073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.062084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.062309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.062548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.062558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.062740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.063100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.063109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.063334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.063710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.063721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.064097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.064497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.064507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.064735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.064934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.064944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 
00:32:14.481 [2024-04-15 22:58:59.065294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.065688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.065698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.065845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.066062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.066074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.066467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.066705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.066716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.067105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.067450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.067461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.067812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.068213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.068225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.068444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.068820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.068831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.069220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.069574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.069584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 
00:32:14.481 [2024-04-15 22:58:59.069983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.070341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.070351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.070560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.070900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.070910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.071125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.071467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.071477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.071845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.072244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.481 [2024-04-15 22:58:59.072254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.481 qpair failed and we were unable to recover it. 00:32:14.481 [2024-04-15 22:58:59.072594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.072991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.073001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.073214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.073532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.073546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.073743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.073963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.073973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 
00:32:14.482 [2024-04-15 22:58:59.074314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.074666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.074677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.074879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.075071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.075082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.075470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.075714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.075725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.076122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.076480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.076491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.076888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.077276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.077286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.077671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.077867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.077877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.078244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.078476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.078488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 
00:32:14.482 [2024-04-15 22:58:59.078596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.078996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.079007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.079397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.079809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.079819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.080208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.080616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.080629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.080883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.081242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.081252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.081651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.082043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.082053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.082452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.082673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.082684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.083085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.083144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.083153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 
00:32:14.482 [2024-04-15 22:58:59.083362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.083754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.083765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.084121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.084522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.084532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.084980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.085285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.085297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.085607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.085948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.085957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.086333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.086711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.086721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.087158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.087512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.087523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.087768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.088155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.088165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 
00:32:14.482 [2024-04-15 22:58:59.088380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.088575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.088587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.088925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.089119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.089129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.089357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.089586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.089597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.089800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.090183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.090193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.482 [2024-04-15 22:58:59.090580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.090932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.482 [2024-04-15 22:58:59.090942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.482 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.091299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.091528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.091538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.091939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.092005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.092014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 
00:32:14.483 [2024-04-15 22:58:59.092330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.092688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.092698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.093094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.093291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.093302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.093623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.093836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.093845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.094225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.094458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.094470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.094832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.095231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.095241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.095598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.095662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.095671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.096027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.096264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.096274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 
00:32:14.483 [2024-04-15 22:58:59.096665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.097063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.097074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.097286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.097638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.097649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.098028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.098192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.098202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.098586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.098653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.098662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.098869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.099204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.099215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.099581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.099979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.099989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.100379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.100769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.100780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 
00:32:14.483 [2024-04-15 22:58:59.100987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.101324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.101335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.101729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.102080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.102090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.102151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.102340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.102350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.102730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.103091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.103101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.103461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.103525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.103535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.103784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.104090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.104101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 00:32:14.483 [2024-04-15 22:58:59.104287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.104635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.483 [2024-04-15 22:58:59.104648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.483 qpair failed and we were unable to recover it. 
00:32:14.483 [2024-04-15 22:58:59.104987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.483 [2024-04-15 22:58:59.105341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.483 [2024-04-15 22:58:59.105352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420
00:32:14.483 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats back-to-back from 22:58:59.105569 through 22:58:59.161618 (console timestamps 00:32:14.483-00:32:14.487), always errno = 111, tqpair=0xc0f8b0, addr=10.0.0.2, port=4420 ...]
[... the same errno = 111 / sock connection error entries for tqpair=0xc0f8b0 continue from 22:58:59.161939 through 22:58:59.163711 ...]
00:32:14.487 22:58:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:14.487 22:58:59 -- common/autotest_common.sh@852 -- # return 0
00:32:14.487 22:58:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:32:14.487 22:58:59 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:14.487 22:58:59 -- common/autotest_common.sh@10 -- # set +x
[... the errno = 111 failure pattern continues interleaved with the xtrace output above, from 22:58:59.164076 through 22:58:59.165297 ...]
[... after start_nvmf_tgt exits its timing block, the same failure pattern keeps repeating: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", from 22:58:59.165461 through 22:58:59.199484 (console timestamps 00:32:14.487-00:32:14.489) ...]
00:32:14.489 [2024-04-15 22:58:59.199860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.200255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.200266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.200494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.200853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.200865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.201070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.201434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.201446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.201620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.201965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.201977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.202357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.202708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.202719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 22:58:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.489 [2024-04-15 22:58:59.203070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 22:58:59 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:14.489 [2024-04-15 22:58:59.203306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.203317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 22:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.489 [2024-04-15 22:58:59.203650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 22:58:59 -- common/autotest_common.sh@10 -- # set +x 00:32:14.489 [2024-04-15 22:58:59.203818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.203828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 
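The trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT trace buried in the output above is the test harness arming its cleanup path before it starts issuing RPCs, and the rpc_cmd bdev_malloc_create 64 512 -b Malloc0 that follows creates the bdev the test will export. A minimal sketch of that trap pattern, with placeholder bodies standing in for the real nvmf/common.sh helpers:

    #!/usr/bin/env bash
    # Placeholder helpers; the real process_shm/nvmftestfini live in the SPDK test scripts.
    process_shm()  { echo "would dump shared-memory segment $2"; }
    nvmftestfini() { echo "would stop nvmf_tgt and restore the test network config"; }
    # Arm cleanup once; it then runs on SIGINT, SIGTERM, or normal exit.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

The '|| :' keeps a failed shared-memory dump from short-circuiting the rest of the cleanup.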
00:32:14.489 [2024-04-15 22:58:59.204193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.204411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.204421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.204775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.205162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.205172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.205527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.205762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.205773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.206030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.206216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.206226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.206598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.206956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.206967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.207373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.207739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.207750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.208108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.208455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.208465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 
00:32:14.489 [2024-04-15 22:58:59.208659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.208975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.208986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.209346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.209553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.209563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.209935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.210179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.210189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.210550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.210913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.210924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.211144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.211366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.211378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.211752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.212144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.212156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.489 qpair failed and we were unable to recover it. 00:32:14.489 [2024-04-15 22:58:59.212538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.212730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.489 [2024-04-15 22:58:59.212740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 
00:32:14.490 [2024-04-15 22:58:59.213108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.213452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.213464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.213829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.214222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.214232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.214591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.214829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.214839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.215061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.215276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.215287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.215645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.215871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.215881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.216086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.216448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.216459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.216694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.217076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.217086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 
00:32:14.490 [2024-04-15 22:58:59.217296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.217664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.217675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.218041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.218372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.218386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.218728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.219107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.219117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.219461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.219640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.219649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.219960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.220146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.220156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.220356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 Malloc0 00:32:14.490 [2024-04-15 22:58:59.220760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.220772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 22:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.490 [2024-04-15 22:58:59.221155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 22:58:59 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:14.490 [2024-04-15 22:58:59.221557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.221569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 
00:32:14.490 22:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.490 [2024-04-15 22:58:59.221945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 22:58:59 -- common/autotest_common.sh@10 -- # set +x 00:32:14.490 [2024-04-15 22:58:59.222152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.222162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.222362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.222634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.222645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.222868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.223229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.223239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.223619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.223917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.223929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.224123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.224496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.224507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.224887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.225283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.225293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.225661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.225972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.225983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 
00:32:14.490 [2024-04-15 22:58:59.226330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.226697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.226708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.226961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.227172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.227183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.227483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.227696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.490 [2024-04-15 22:58:59.227707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.490 qpair failed and we were unable to recover it. 00:32:14.490 [2024-04-15 22:58:59.227884] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.491 [2024-04-15 22:58:59.227921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.228293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.228305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.228528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.228899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.228911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.229130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.229471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.229482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.229708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.229943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.229954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 
00:32:14.491 [2024-04-15 22:58:59.230175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.230547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.230559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.230888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.231287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.231297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.231659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.231895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.231906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.232255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.232611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.232623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.233024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.233369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.233379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.233594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.233935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.233946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.234304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.234383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.234391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 
00:32:14.491 [2024-04-15 22:58:59.234763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.234979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.234989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.235212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.235599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.235609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.235973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.236161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.236171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.236527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.236610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.236620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 22:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.491 [2024-04-15 22:58:59.236954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 22:58:59 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:14.491 [2024-04-15 22:58:59.237300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.237310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 22:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.491 [2024-04-15 22:58:59.237736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 22:58:59 -- common/autotest_common.sh@10 -- # set +x 00:32:14.491 [2024-04-15 22:58:59.238039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.238050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.238396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.238610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.238622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 
00:32:14.491 [2024-04-15 22:58:59.238852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.239245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.239255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.239634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.239854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.239863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.239927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.240095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.240105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.240313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.240516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.240529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.240914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.241129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.241141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.241505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.241866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.241878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.242086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.242440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.242451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 
00:32:14.491 [2024-04-15 22:58:59.242810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.243201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.243212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.243412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.243731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.243741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.244122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.244324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.244335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.491 qpair failed and we were unable to recover it. 00:32:14.491 [2024-04-15 22:58:59.244694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.491 [2024-04-15 22:58:59.245061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.245072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.245451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.245635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.245645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.245966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.246199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.246209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.246563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.246935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.246946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 
00:32:14.492 [2024-04-15 22:58:59.247349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.247732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.247743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.247955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.248200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.248211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.248464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.248822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.248833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 22:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.492 [2024-04-15 22:58:59.249183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 22:58:59 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:14.492 [2024-04-15 22:58:59.249532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.249546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 22:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.492 22:58:59 -- common/autotest_common.sh@10 -- # set +x 00:32:14.492 [2024-04-15 22:58:59.249942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.250284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.250295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.250668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.251025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.251036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.251436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.251751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.251762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 
00:32:14.492 [2024-04-15 22:58:59.252128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.252405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.252416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.252766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.253114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.253124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.253498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.253821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.253831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.253897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.254234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.254245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.254622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.255010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.255023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.255463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.255677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.255688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.256021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.256249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.256259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 
00:32:14.492 [2024-04-15 22:58:59.256503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.256844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.256855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.257274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.257528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.257539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.257719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.258099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.258110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.258447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.258808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.258820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.259181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.259571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.259582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.259925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.260279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.260289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.260650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.261000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 22:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.492 [2024-04-15 22:58:59.261011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 
00:32:14.492 [2024-04-15 22:58:59.261211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 22:58:59 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.492 [2024-04-15 22:58:59.261421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.261431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 22:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.492 [2024-04-15 22:58:59.261790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 22:58:59 -- common/autotest_common.sh@10 -- # set +x 00:32:14.492 [2024-04-15 22:58:59.262067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.262077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.492 [2024-04-15 22:58:59.262431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.262795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.492 [2024-04-15 22:58:59.262806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.492 qpair failed and we were unable to recover it. 00:32:14.493 [2024-04-15 22:58:59.263169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.263565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.263576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.493 qpair failed and we were unable to recover it. 00:32:14.493 [2024-04-15 22:58:59.263798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.264164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.264174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.493 qpair failed and we were unable to recover it. 00:32:14.493 [2024-04-15 22:58:59.264566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.264921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.264932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.493 qpair failed and we were unable to recover it. 00:32:14.493 [2024-04-15 22:58:59.265267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.265456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.265467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.493 qpair failed and we were unable to recover it. 
00:32:14.493 [2024-04-15 22:58:59.265673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.266033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.266043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.493 qpair failed and we were unable to recover it. 00:32:14.493 [2024-04-15 22:58:59.266434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.266630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.266640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.493 qpair failed and we were unable to recover it. 00:32:14.493 [2024-04-15 22:58:59.266801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.267001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.267011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.493 qpair failed and we were unable to recover it. 00:32:14.493 [2024-04-15 22:58:59.267396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.267695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.267707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f8b0 with addr=10.0.0.2, port=4420 00:32:14.493 qpair failed and we were unable to recover it. 00:32:14.493 [2024-04-15 22:58:59.268069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.493 [2024-04-15 22:58:59.268150] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.493 [2024-04-15 22:58:59.270581] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:32:14.493 [2024-04-15 22:58:59.270629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f8b0 (107): Transport endpoint is not connected 00:32:14.493 [2024-04-15 22:58:59.270681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.493 qpair failed and we were unable to recover it. 
00:32:14.493 22:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.493 22:58:59 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:14.493 22:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.493 22:58:59 -- common/autotest_common.sh@10 -- # set +x 00:32:14.756 [2024-04-15 22:58:59.278793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.756 [2024-04-15 22:58:59.278885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.756 [2024-04-15 22:58:59.278905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.756 [2024-04-15 22:58:59.278914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.756 [2024-04-15 22:58:59.278922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.756 [2024-04-15 22:58:59.278939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.756 qpair failed and we were unable to recover it. 00:32:14.756 22:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.756 22:58:59 -- host/target_disconnect.sh@58 -- # wait 1337924 00:32:14.756 [2024-04-15 22:58:59.288672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.756 [2024-04-15 22:58:59.288750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.756 [2024-04-15 22:58:59.288767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.756 [2024-04-15 22:58:59.288775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.756 [2024-04-15 22:58:59.288781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.756 [2024-04-15 22:58:59.288797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.756 qpair failed and we were unable to recover it. 00:32:14.756 [2024-04-15 22:58:59.298698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.756 [2024-04-15 22:58:59.298776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.756 [2024-04-15 22:58:59.298792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.756 [2024-04-15 22:58:59.298799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.756 [2024-04-15 22:58:59.298806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.756 [2024-04-15 22:58:59.298820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.756 qpair failed and we were unable to recover it. 
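[editor's note] For reference, a minimal sketch of the target-side setup the xtrace above is exercising: the rpc_cmd helper used by host/target_disconnect.sh forwards its arguments to SPDK's scripts/rpc.py, so the two listener adds correspond to the rpc.py invocations below (the repo-relative path and invocation form are assumptions for illustration, not taken from this log; the arguments are copied from the trace). The earlier "connect() failed, errno = 111" lines are ECONNREFUSED from host-side connect attempts racing this setup; they stop once the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice appears.
  # Sketch only: assumed rpc.py form, arguments taken from the xtrace above
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420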
00:32:14.756 [2024-04-15 22:58:59.308668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.756 [2024-04-15 22:58:59.308800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.756 [2024-04-15 22:58:59.308818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.756 [2024-04-15 22:58:59.308826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.756 [2024-04-15 22:58:59.308833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.756 [2024-04-15 22:58:59.308848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.756 qpair failed and we were unable to recover it. 00:32:14.756 [2024-04-15 22:58:59.318700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.756 [2024-04-15 22:58:59.318768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.318784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.318791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.318797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.318812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 00:32:14.757 [2024-04-15 22:58:59.328613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.328682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.328699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.328707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.328713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.328728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 
00:32:14.757 [2024-04-15 22:58:59.338725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.338800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.338815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.338823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.338829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.338843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 00:32:14.757 [2024-04-15 22:58:59.348808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.348910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.348929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.348936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.348944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.348958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 00:32:14.757 [2024-04-15 22:58:59.358674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.358743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.358758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.358765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.358772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.358785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 
00:32:14.757 [2024-04-15 22:58:59.368851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.368922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.368937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.368945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.368951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.368966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 00:32:14.757 [2024-04-15 22:58:59.378840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.378910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.378925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.378932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.378939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.378952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 00:32:14.757 [2024-04-15 22:58:59.388860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.388943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.388958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.388966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.388973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.388987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 
00:32:14.757 [2024-04-15 22:58:59.398904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.398969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.398984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.398992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.398998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.399012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 00:32:14.757 [2024-04-15 22:58:59.408889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.408961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.408976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.408984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.408991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.409005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 00:32:14.757 [2024-04-15 22:58:59.418838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.418937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.418952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.418960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.418966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.418980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 
00:32:14.757 [2024-04-15 22:58:59.428981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.429056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.429071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.429078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.757 [2024-04-15 22:58:59.429084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.757 [2024-04-15 22:58:59.429099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.757 qpair failed and we were unable to recover it. 00:32:14.757 [2024-04-15 22:58:59.439005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.757 [2024-04-15 22:58:59.439070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.757 [2024-04-15 22:58:59.439088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.757 [2024-04-15 22:58:59.439096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.439102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.439117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:14.758 [2024-04-15 22:58:59.449036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.449132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.449148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.449155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.449162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.449175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 
00:32:14.758 [2024-04-15 22:58:59.459071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.459143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.459158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.459166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.459173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.459186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:14.758 [2024-04-15 22:58:59.469088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.469172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.469187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.469195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.469201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.469214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:14.758 [2024-04-15 22:58:59.479127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.479216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.479235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.479242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.479249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.479267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 
00:32:14.758 [2024-04-15 22:58:59.489156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.489231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.489256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.489266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.489273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.489292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:14.758 [2024-04-15 22:58:59.499183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.499258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.499283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.499292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.499300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.499318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:14.758 [2024-04-15 22:58:59.509293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.509365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.509382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.509389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.509396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.509411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 
00:32:14.758 [2024-04-15 22:58:59.519221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.519293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.519310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.519317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.519324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.519338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:14.758 [2024-04-15 22:58:59.529282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.529355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.529375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.529383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.529389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.529403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:14.758 [2024-04-15 22:58:59.539284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.539353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.539368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.539376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.539382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.539395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 
00:32:14.758 [2024-04-15 22:58:59.549348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.549444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.549460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.549467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.549474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.549487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:14.758 [2024-04-15 22:58:59.559338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.758 [2024-04-15 22:58:59.559413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.758 [2024-04-15 22:58:59.559429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.758 [2024-04-15 22:58:59.559436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.758 [2024-04-15 22:58:59.559442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:14.758 [2024-04-15 22:58:59.559457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.758 qpair failed and we were unable to recover it. 00:32:15.021 [2024-04-15 22:58:59.569368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.021 [2024-04-15 22:58:59.569449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.021 [2024-04-15 22:58:59.569465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.021 [2024-04-15 22:58:59.569472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.021 [2024-04-15 22:58:59.569479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.569496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 
00:32:15.022 [2024-04-15 22:58:59.579481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.579554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.579571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.579579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.579585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.579600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.589508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.589591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.589607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.589614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.589621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.589635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.599515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.599588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.599604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.599612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.599618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.599633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 
00:32:15.022 [2024-04-15 22:58:59.609507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.609582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.609598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.609605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.609612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.609626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.619525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.619595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.619613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.619621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.619627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.619642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.629581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.629651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.629666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.629674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.629680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.629693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 
00:32:15.022 [2024-04-15 22:58:59.639468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.639584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.639601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.639609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.639616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.639630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.649576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.649642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.649657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.649664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.649671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.649684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.659651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.659723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.659739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.659746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.659752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.659769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 
00:32:15.022 [2024-04-15 22:58:59.669681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.669771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.669786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.669794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.669801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.669814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.679703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.679777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.679792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.679799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.679806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.679819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.689718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.689782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.689797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.689804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.689810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.689823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 
00:32:15.022 [2024-04-15 22:58:59.699781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.022 [2024-04-15 22:58:59.699849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.022 [2024-04-15 22:58:59.699863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.022 [2024-04-15 22:58:59.699871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.022 [2024-04-15 22:58:59.699877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.022 [2024-04-15 22:58:59.699890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.022 qpair failed and we were unable to recover it. 00:32:15.022 [2024-04-15 22:58:59.709810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.709888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.709906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.709913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.709920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.709933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 00:32:15.023 [2024-04-15 22:58:59.719836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.719906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.719922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.719929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.719935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.719949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 
00:32:15.023 [2024-04-15 22:58:59.729859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.729929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.729945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.729952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.729958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.729972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 00:32:15.023 [2024-04-15 22:58:59.739873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.739946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.739960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.739967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.739974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.739987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 00:32:15.023 [2024-04-15 22:58:59.749908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.749983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.749998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.750006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.750012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.750029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 
00:32:15.023 [2024-04-15 22:58:59.759815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.759886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.759901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.759908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.759915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.759929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 00:32:15.023 [2024-04-15 22:58:59.769967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.770038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.770053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.770060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.770066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.770080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 00:32:15.023 [2024-04-15 22:58:59.779999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.780067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.780082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.780090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.780096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.780109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 
00:32:15.023 [2024-04-15 22:58:59.790022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.790095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.790110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.790117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.790123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.790137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 00:32:15.023 [2024-04-15 22:58:59.800044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.800111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.800133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.800140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.800148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.800161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 00:32:15.023 [2024-04-15 22:58:59.810082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.810157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.810171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.810179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.810186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.810199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 
00:32:15.023 [2024-04-15 22:58:59.820105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.023 [2024-04-15 22:58:59.820170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.023 [2024-04-15 22:58:59.820187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.023 [2024-04-15 22:58:59.820194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.023 [2024-04-15 22:58:59.820200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.023 [2024-04-15 22:58:59.820215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.023 qpair failed and we were unable to recover it. 00:32:15.287 [2024-04-15 22:58:59.830144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.287 [2024-04-15 22:58:59.830220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.287 [2024-04-15 22:58:59.830245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.287 [2024-04-15 22:58:59.830255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.287 [2024-04-15 22:58:59.830262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.287 [2024-04-15 22:58:59.830280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.287 qpair failed and we were unable to recover it. 00:32:15.287 [2024-04-15 22:58:59.840148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.287 [2024-04-15 22:58:59.840214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.287 [2024-04-15 22:58:59.840231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.287 [2024-04-15 22:58:59.840239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.287 [2024-04-15 22:58:59.840246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.287 [2024-04-15 22:58:59.840266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.287 qpair failed and we were unable to recover it. 
00:32:15.287 [2024-04-15 22:58:59.850236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.287 [2024-04-15 22:58:59.850303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.287 [2024-04-15 22:58:59.850319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.287 [2024-04-15 22:58:59.850327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.287 [2024-04-15 22:58:59.850333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.287 [2024-04-15 22:58:59.850348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.287 qpair failed and we were unable to recover it. 00:32:15.287 [2024-04-15 22:58:59.860110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.287 [2024-04-15 22:58:59.860181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.287 [2024-04-15 22:58:59.860196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.287 [2024-04-15 22:58:59.860204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.287 [2024-04-15 22:58:59.860210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.287 [2024-04-15 22:58:59.860224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.287 qpair failed and we were unable to recover it. 00:32:15.287 [2024-04-15 22:58:59.870253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.287 [2024-04-15 22:58:59.870324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.287 [2024-04-15 22:58:59.870339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.287 [2024-04-15 22:58:59.870347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.287 [2024-04-15 22:58:59.870353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.287 [2024-04-15 22:58:59.870366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.287 qpair failed and we were unable to recover it. 
00:32:15.287 [2024-04-15 22:58:59.880286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.287 [2024-04-15 22:58:59.880359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.287 [2024-04-15 22:58:59.880374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.287 [2024-04-15 22:58:59.880381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.287 [2024-04-15 22:58:59.880388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.287 [2024-04-15 22:58:59.880402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.287 qpair failed and we were unable to recover it. 00:32:15.287 [2024-04-15 22:58:59.890312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.287 [2024-04-15 22:58:59.890374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.287 [2024-04-15 22:58:59.890393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.287 [2024-04-15 22:58:59.890401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.287 [2024-04-15 22:58:59.890407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.287 [2024-04-15 22:58:59.890421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.287 qpair failed and we were unable to recover it. 00:32:15.287 [2024-04-15 22:58:59.900374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.287 [2024-04-15 22:58:59.900441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.287 [2024-04-15 22:58:59.900456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.287 [2024-04-15 22:58:59.900463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.900469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.900483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 
00:32:15.288 [2024-04-15 22:58:59.910272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.910344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.910361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.910369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.910375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.910390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 00:32:15.288 [2024-04-15 22:58:59.920405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.920479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.920495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.920503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.920510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.920523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 00:32:15.288 [2024-04-15 22:58:59.930326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.930425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.930441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.930449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.930459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.930472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 
00:32:15.288 [2024-04-15 22:58:59.940471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.940538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.940558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.940565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.940571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.940586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 00:32:15.288 [2024-04-15 22:58:59.950386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.950462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.950477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.950484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.950490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.950504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 00:32:15.288 [2024-04-15 22:58:59.960509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.960641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.960657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.960664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.960671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.960684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 
00:32:15.288 [2024-04-15 22:58:59.970565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.970630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.970645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.970653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.970659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.970673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 00:32:15.288 [2024-04-15 22:58:59.980592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.980660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.980678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.980686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.980692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.980707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 00:32:15.288 [2024-04-15 22:58:59.990621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:58:59.990727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:58:59.990742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:58:59.990750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:58:59.990756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.288 [2024-04-15 22:58:59.990770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.288 qpair failed and we were unable to recover it. 
00:32:15.288 [2024-04-15 22:59:00.000573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.288 [2024-04-15 22:59:00.000644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.288 [2024-04-15 22:59:00.000659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.288 [2024-04-15 22:59:00.000667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.288 [2024-04-15 22:59:00.000674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.000688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 00:32:15.289 [2024-04-15 22:59:00.010706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.010783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.010800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.010808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.010814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.010829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 00:32:15.289 [2024-04-15 22:59:00.020715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.020803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.020820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.020828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.020838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.020852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 
00:32:15.289 [2024-04-15 22:59:00.030635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.030771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.030788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.030795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.030801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.030815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 00:32:15.289 [2024-04-15 22:59:00.040781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.040856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.040871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.040878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.040885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.040898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 00:32:15.289 [2024-04-15 22:59:00.050695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.050767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.050783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.050790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.050797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.050811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 
00:32:15.289 [2024-04-15 22:59:00.060848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.060931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.060946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.060954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.060961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.060974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 00:32:15.289 [2024-04-15 22:59:00.070864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.070998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.071014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.071022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.071028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.071041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 00:32:15.289 [2024-04-15 22:59:00.080760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.080837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.080852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.080860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.080866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.080880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 
00:32:15.289 [2024-04-15 22:59:00.090879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.289 [2024-04-15 22:59:00.090961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.289 [2024-04-15 22:59:00.090977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.289 [2024-04-15 22:59:00.090984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.289 [2024-04-15 22:59:00.090991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.289 [2024-04-15 22:59:00.091005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.289 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.101023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.101091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.101107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.101114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.101121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.101134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.110958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.111055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.111071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.111078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.111089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.111102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 
00:32:15.553 [2024-04-15 22:59:00.120986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.121059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.121075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.121082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.121089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.121103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.131000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.131068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.131083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.131091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.131097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.131111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.141040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.141109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.141124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.141131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.141138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.141151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 
00:32:15.553 [2024-04-15 22:59:00.150959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.151029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.151044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.151051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.151058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.151071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.160991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.161068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.161083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.161090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.161097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.161110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.171129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.171200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.171215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.171223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.171229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.171242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 
00:32:15.553 [2024-04-15 22:59:00.181158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.181228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.181243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.181251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.181257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.181270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.191197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.191276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.191302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.191311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.191319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.191338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.201224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.201301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.201327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.201338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.201349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.201368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 
00:32:15.553 [2024-04-15 22:59:00.211254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.211375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.553 [2024-04-15 22:59:00.211401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.553 [2024-04-15 22:59:00.211409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.553 [2024-04-15 22:59:00.211417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.553 [2024-04-15 22:59:00.211435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.553 qpair failed and we were unable to recover it. 00:32:15.553 [2024-04-15 22:59:00.221289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.553 [2024-04-15 22:59:00.221361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.221379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.221387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.221394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.221409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.231224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.231295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.231311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.231318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.231324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.231338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 
00:32:15.554 [2024-04-15 22:59:00.241403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.241515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.241531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.241538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.241552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.241566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.251424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.251511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.251526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.251534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.251540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.251560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.261456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.261538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.261557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.261565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.261571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.261586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 
00:32:15.554 [2024-04-15 22:59:00.271426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.271502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.271518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.271525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.271531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.271551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.281469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.281538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.281558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.281565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.281572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.281586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.291382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.291446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.291462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.291469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.291480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.291495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 
00:32:15.554 [2024-04-15 22:59:00.301540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.301616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.301632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.301640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.301646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.301662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.311528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.311612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.311628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.311635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.311643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.311657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.321576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.321648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.321663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.321671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.321677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.321692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 
00:32:15.554 [2024-04-15 22:59:00.331533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.331610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.331626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.331633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.331640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.331654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.341630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.341710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.341726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.341733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.341741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.554 [2024-04-15 22:59:00.341754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.554 qpair failed and we were unable to recover it. 00:32:15.554 [2024-04-15 22:59:00.351677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.554 [2024-04-15 22:59:00.351745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.554 [2024-04-15 22:59:00.351761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.554 [2024-04-15 22:59:00.351768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.554 [2024-04-15 22:59:00.351775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.555 [2024-04-15 22:59:00.351788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.555 qpair failed and we were unable to recover it. 
00:32:15.818 [2024-04-15 22:59:00.361602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.361672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.361687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.361694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.361701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.361715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 00:32:15.818 [2024-04-15 22:59:00.371730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.371802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.371817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.371824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.371831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.371844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 00:32:15.818 [2024-04-15 22:59:00.381669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.381742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.381756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.381764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.381775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.381789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 
00:32:15.818 [2024-04-15 22:59:00.391780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.391855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.391869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.391876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.391883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.391896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 00:32:15.818 [2024-04-15 22:59:00.401841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.401920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.401935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.401942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.401948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.401961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 00:32:15.818 [2024-04-15 22:59:00.411875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.411992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.412007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.412014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.412021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.412034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 
00:32:15.818 [2024-04-15 22:59:00.421756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.421824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.421840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.421847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.421854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.421867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 00:32:15.818 [2024-04-15 22:59:00.431906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.431978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.431993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.432000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.432007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.432020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 00:32:15.818 [2024-04-15 22:59:00.441932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.442002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.442017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.442024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.442030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.442044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 
00:32:15.818 [2024-04-15 22:59:00.451841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.451907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.451924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.451931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.451937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.451951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 00:32:15.818 [2024-04-15 22:59:00.461968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.462037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.462052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.818 [2024-04-15 22:59:00.462060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.818 [2024-04-15 22:59:00.462066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.818 [2024-04-15 22:59:00.462079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.818 qpair failed and we were unable to recover it. 00:32:15.818 [2024-04-15 22:59:00.471998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.818 [2024-04-15 22:59:00.472097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.818 [2024-04-15 22:59:00.472113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.472123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.472130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.472143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 
00:32:15.819 [2024-04-15 22:59:00.482054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.482123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.482138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.482146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.482152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.482165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 00:32:15.819 [2024-04-15 22:59:00.491955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.492022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.492038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.492046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.492052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.492067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 00:32:15.819 [2024-04-15 22:59:00.502121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.502195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.502211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.502218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.502225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.502238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 
00:32:15.819 [2024-04-15 22:59:00.512100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.512171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.512186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.512193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.512200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.512214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 00:32:15.819 [2024-04-15 22:59:00.522040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.522111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.522128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.522139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.522147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.522160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 00:32:15.819 [2024-04-15 22:59:00.532087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.532187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.532205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.532213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.532219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.532233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 
00:32:15.819 [2024-04-15 22:59:00.542256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.542369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.542384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.542392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.542398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.542412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 00:32:15.819 [2024-04-15 22:59:00.552248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.552315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.552330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.552338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.552344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.552357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 00:32:15.819 [2024-04-15 22:59:00.562280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.562361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.562377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.562388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.562395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.562408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 
00:32:15.819 [2024-04-15 22:59:00.572305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.572372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.572387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.572394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.572401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.572414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 00:32:15.819 [2024-04-15 22:59:00.582367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.582487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.582503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.582510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.582516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.582530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 00:32:15.819 [2024-04-15 22:59:00.592327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.592406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.592421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.592429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.592435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.819 [2024-04-15 22:59:00.592449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.819 qpair failed and we were unable to recover it. 
00:32:15.819 [2024-04-15 22:59:00.602365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.819 [2024-04-15 22:59:00.602434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.819 [2024-04-15 22:59:00.602449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.819 [2024-04-15 22:59:00.602456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.819 [2024-04-15 22:59:00.602463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.820 [2024-04-15 22:59:00.602476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.820 qpair failed and we were unable to recover it. 00:32:15.820 [2024-04-15 22:59:00.612363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.820 [2024-04-15 22:59:00.612432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.820 [2024-04-15 22:59:00.612447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.820 [2024-04-15 22:59:00.612454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.820 [2024-04-15 22:59:00.612460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.820 [2024-04-15 22:59:00.612473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.820 qpair failed and we were unable to recover it. 00:32:15.820 [2024-04-15 22:59:00.622488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.820 [2024-04-15 22:59:00.622562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.820 [2024-04-15 22:59:00.622578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.820 [2024-04-15 22:59:00.622585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.820 [2024-04-15 22:59:00.622592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:15.820 [2024-04-15 22:59:00.622605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.820 qpair failed and we were unable to recover it. 
00:32:16.083 [2024-04-15 22:59:00.632459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.083 [2024-04-15 22:59:00.632528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.083 [2024-04-15 22:59:00.632547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.083 [2024-04-15 22:59:00.632555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.083 [2024-04-15 22:59:00.632561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.083 [2024-04-15 22:59:00.632575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.083 qpair failed and we were unable to recover it. 00:32:16.083 [2024-04-15 22:59:00.642523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.083 [2024-04-15 22:59:00.642638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.083 [2024-04-15 22:59:00.642654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.083 [2024-04-15 22:59:00.642661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.083 [2024-04-15 22:59:00.642668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.083 [2024-04-15 22:59:00.642681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.083 qpair failed and we were unable to recover it. 00:32:16.083 [2024-04-15 22:59:00.652524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.083 [2024-04-15 22:59:00.652598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.083 [2024-04-15 22:59:00.652613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.083 [2024-04-15 22:59:00.652628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.083 [2024-04-15 22:59:00.652635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.083 [2024-04-15 22:59:00.652648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.083 qpair failed and we were unable to recover it. 
00:32:16.083 [2024-04-15 22:59:00.662567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.083 [2024-04-15 22:59:00.662637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.083 [2024-04-15 22:59:00.662652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.083 [2024-04-15 22:59:00.662660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.083 [2024-04-15 22:59:00.662666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.083 [2024-04-15 22:59:00.662680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.083 qpair failed and we were unable to recover it. 00:32:16.083 [2024-04-15 22:59:00.672572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.083 [2024-04-15 22:59:00.672686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.083 [2024-04-15 22:59:00.672702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.672709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.672716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.672729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 00:32:16.084 [2024-04-15 22:59:00.682581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.682651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.682668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.682678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.682685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.682699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 
00:32:16.084 [2024-04-15 22:59:00.692623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.692696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.692712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.692720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.692726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.692740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 00:32:16.084 [2024-04-15 22:59:00.702650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.702719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.702734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.702741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.702748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.702761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 00:32:16.084 [2024-04-15 22:59:00.712571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.712652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.712667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.712674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.712681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.712694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 
00:32:16.084 [2024-04-15 22:59:00.722691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.722805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.722821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.722828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.722835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.722848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 00:32:16.084 [2024-04-15 22:59:00.732726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.732795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.732810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.732818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.732824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.732838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 00:32:16.084 [2024-04-15 22:59:00.742782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.742849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.742864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.742875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.742882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.742896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 
00:32:16.084 [2024-04-15 22:59:00.752799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.752877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.752892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.752899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.752907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.752921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 00:32:16.084 [2024-04-15 22:59:00.762816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.762883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.762898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.762905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.762914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.762928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 00:32:16.084 [2024-04-15 22:59:00.772721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.772789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.772804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.772811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.772817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.772831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 
00:32:16.084 [2024-04-15 22:59:00.782832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.782905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.782919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.782927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.782933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.084 [2024-04-15 22:59:00.782946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.084 qpair failed and we were unable to recover it. 00:32:16.084 [2024-04-15 22:59:00.792893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.084 [2024-04-15 22:59:00.792969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.084 [2024-04-15 22:59:00.792984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.084 [2024-04-15 22:59:00.792991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.084 [2024-04-15 22:59:00.792998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.793011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 00:32:16.085 [2024-04-15 22:59:00.802914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.802980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.802996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.803004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.803012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.803026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 
00:32:16.085 [2024-04-15 22:59:00.812952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.813019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.813034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.813041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.813047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.813061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 00:32:16.085 [2024-04-15 22:59:00.822993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.823063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.823078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.823086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.823093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.823106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 00:32:16.085 [2024-04-15 22:59:00.832988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.833061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.833076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.833086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.833093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.833107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 
00:32:16.085 [2024-04-15 22:59:00.843045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.843153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.843169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.843176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.843183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.843196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 00:32:16.085 [2024-04-15 22:59:00.853069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.853139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.853154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.853162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.853168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.853182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 00:32:16.085 [2024-04-15 22:59:00.863094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.863163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.863178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.863185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.863191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.863204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 
00:32:16.085 [2024-04-15 22:59:00.873113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.873185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.873201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.873208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.873214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.873228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 00:32:16.085 [2024-04-15 22:59:00.883142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.085 [2024-04-15 22:59:00.883207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.085 [2024-04-15 22:59:00.883222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.085 [2024-04-15 22:59:00.883229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.085 [2024-04-15 22:59:00.883236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.085 [2024-04-15 22:59:00.883249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.085 qpair failed and we were unable to recover it. 00:32:16.349 [2024-04-15 22:59:00.893166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.349 [2024-04-15 22:59:00.893233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.349 [2024-04-15 22:59:00.893248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.349 [2024-04-15 22:59:00.893255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.349 [2024-04-15 22:59:00.893262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.349 [2024-04-15 22:59:00.893275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.349 qpair failed and we were unable to recover it. 
00:32:16.349 [2024-04-15 22:59:00.903202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.349 [2024-04-15 22:59:00.903272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.349 [2024-04-15 22:59:00.903288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.349 [2024-04-15 22:59:00.903295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.349 [2024-04-15 22:59:00.903301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.349 [2024-04-15 22:59:00.903315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.349 qpair failed and we were unable to recover it. 00:32:16.349 [2024-04-15 22:59:00.913210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.349 [2024-04-15 22:59:00.913322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.349 [2024-04-15 22:59:00.913348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.349 [2024-04-15 22:59:00.913357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.349 [2024-04-15 22:59:00.913364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.349 [2024-04-15 22:59:00.913382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.349 qpair failed and we were unable to recover it. 00:32:16.349 [2024-04-15 22:59:00.923266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.349 [2024-04-15 22:59:00.923340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:00.923370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:00.923380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:00.923387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:00.923405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 
00:32:16.350 [2024-04-15 22:59:00.933304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:00.933369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:00.933386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:00.933394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:00.933400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:00.933415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 00:32:16.350 [2024-04-15 22:59:00.943336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:00.943418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:00.943434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:00.943442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:00.943449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:00.943463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 00:32:16.350 [2024-04-15 22:59:00.953366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:00.953435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:00.953451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:00.953458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:00.953465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:00.953478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 
00:32:16.350 [2024-04-15 22:59:00.963385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:00.963453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:00.963468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:00.963476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:00.963482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:00.963497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 00:32:16.350 [2024-04-15 22:59:00.973396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:00.973511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:00.973528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:00.973535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:00.973546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:00.973561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 00:32:16.350 [2024-04-15 22:59:00.983432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:00.983498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:00.983513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:00.983521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:00.983527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:00.983540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 
00:32:16.350 [2024-04-15 22:59:00.993562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:00.993631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:00.993647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:00.993654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:00.993660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:00.993673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 00:32:16.350 [2024-04-15 22:59:01.003386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:01.003457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:01.003473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:01.003480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:01.003486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:01.003500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 00:32:16.350 [2024-04-15 22:59:01.013524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:01.013643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:01.013663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:01.013670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:01.013677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:01.013690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 
00:32:16.350 [2024-04-15 22:59:01.023557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:01.023629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:01.023645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:01.023652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:01.023658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:01.023672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 00:32:16.350 [2024-04-15 22:59:01.033564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:01.033637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:01.033652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:01.033660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:01.033666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:01.033680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 00:32:16.350 [2024-04-15 22:59:01.043612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:01.043746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:01.043762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.350 [2024-04-15 22:59:01.043769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.350 [2024-04-15 22:59:01.043776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.350 [2024-04-15 22:59:01.043790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.350 qpair failed and we were unable to recover it. 
00:32:16.350 [2024-04-15 22:59:01.053635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.350 [2024-04-15 22:59:01.053698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.350 [2024-04-15 22:59:01.053713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.053721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.053727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.053740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 00:32:16.351 [2024-04-15 22:59:01.063675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.063750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.063765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.063773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.063779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.063792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 00:32:16.351 [2024-04-15 22:59:01.073673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.073749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.073765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.073772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.073779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.073792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 
00:32:16.351 [2024-04-15 22:59:01.083694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.083764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.083780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.083787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.083794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.083808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 00:32:16.351 [2024-04-15 22:59:01.093744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.093812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.093828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.093835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.093841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.093856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 00:32:16.351 [2024-04-15 22:59:01.103759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.103832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.103851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.103858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.103865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.103879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 
00:32:16.351 [2024-04-15 22:59:01.113726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.113803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.113819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.113826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.113832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.113846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 00:32:16.351 [2024-04-15 22:59:01.123840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.123906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.123922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.123929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.123936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.123950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 00:32:16.351 [2024-04-15 22:59:01.133852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.133919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.133934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.133942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.133948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.133962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 
00:32:16.351 [2024-04-15 22:59:01.143879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.143996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.144012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.144019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.144025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.144043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 00:32:16.351 [2024-04-15 22:59:01.153877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.351 [2024-04-15 22:59:01.153954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.351 [2024-04-15 22:59:01.153969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.351 [2024-04-15 22:59:01.153977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.351 [2024-04-15 22:59:01.153983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.351 [2024-04-15 22:59:01.153997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.351 qpair failed and we were unable to recover it. 00:32:16.614 [2024-04-15 22:59:01.163913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.614 [2024-04-15 22:59:01.163986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.614 [2024-04-15 22:59:01.164001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.614 [2024-04-15 22:59:01.164009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.614 [2024-04-15 22:59:01.164015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.614 [2024-04-15 22:59:01.164029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.614 qpair failed and we were unable to recover it. 
00:32:16.614 [2024-04-15 22:59:01.173961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.614 [2024-04-15 22:59:01.174030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.614 [2024-04-15 22:59:01.174045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.614 [2024-04-15 22:59:01.174052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.614 [2024-04-15 22:59:01.174059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.614 [2024-04-15 22:59:01.174073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.614 qpair failed and we were unable to recover it. 00:32:16.614 [2024-04-15 22:59:01.183990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.184058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.184073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.184081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.184087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.184101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.194020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.194094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.194113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.194121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.194128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.194142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 
00:32:16.615 [2024-04-15 22:59:01.204057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.204124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.204139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.204146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.204153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.204166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.213989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.214062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.214078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.214085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.214092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.214105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.224104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.224172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.224188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.224195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.224201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.224215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 
00:32:16.615 [2024-04-15 22:59:01.234144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.234229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.234245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.234252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.234259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.234276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.244148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.244225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.244250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.244259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.244267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.244286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.254170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.254252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.254278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.254286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.254294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.254313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 
00:32:16.615 [2024-04-15 22:59:01.264154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.264248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.264265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.264273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.264280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.264295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.274276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.274391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.274416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.274425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.274433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.274451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.284333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.284448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.284471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.284479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.284485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.284500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 
00:32:16.615 [2024-04-15 22:59:01.294182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.294255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.294271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.294279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.294285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.294299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.304392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.304481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.304498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.304506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.615 [2024-04-15 22:59:01.304513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.615 [2024-04-15 22:59:01.304527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.615 qpair failed and we were unable to recover it. 00:32:16.615 [2024-04-15 22:59:01.314360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.615 [2024-04-15 22:59:01.314430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.615 [2024-04-15 22:59:01.314446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.615 [2024-04-15 22:59:01.314453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.314460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.314473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 
00:32:16.616 [2024-04-15 22:59:01.324379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.324497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.324514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.324521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.324528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.324554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 00:32:16.616 [2024-04-15 22:59:01.334409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.334478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.334494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.334501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.334508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.334521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 00:32:16.616 [2024-04-15 22:59:01.344443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.344511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.344526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.344533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.344540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.344559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 
00:32:16.616 [2024-04-15 22:59:01.354461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.354535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.354557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.354565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.354571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.354586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 00:32:16.616 [2024-04-15 22:59:01.364498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.364573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.364589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.364597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.364604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.364618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 00:32:16.616 [2024-04-15 22:59:01.374529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.374605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.374624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.374632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.374639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.374653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 
00:32:16.616 [2024-04-15 22:59:01.384572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.384681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.384697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.384704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.384711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.384725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 00:32:16.616 [2024-04-15 22:59:01.394585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.394656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.394671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.394678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.394685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.394698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 00:32:16.616 [2024-04-15 22:59:01.404633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.404703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.404718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.404725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.404732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.404745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 
00:32:16.616 [2024-04-15 22:59:01.414618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.616 [2024-04-15 22:59:01.414688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.616 [2024-04-15 22:59:01.414704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.616 [2024-04-15 22:59:01.414711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.616 [2024-04-15 22:59:01.414718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.616 [2024-04-15 22:59:01.414735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.616 qpair failed and we were unable to recover it. 00:32:16.879 [2024-04-15 22:59:01.424575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.879 [2024-04-15 22:59:01.424650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.879 [2024-04-15 22:59:01.424665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.879 [2024-04-15 22:59:01.424672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.879 [2024-04-15 22:59:01.424679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.879 [2024-04-15 22:59:01.424693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.879 qpair failed and we were unable to recover it. 00:32:16.879 [2024-04-15 22:59:01.434715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.879 [2024-04-15 22:59:01.434790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.879 [2024-04-15 22:59:01.434805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.879 [2024-04-15 22:59:01.434813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.879 [2024-04-15 22:59:01.434819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.879 [2024-04-15 22:59:01.434834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.879 qpair failed and we were unable to recover it. 
00:32:16.879 [2024-04-15 22:59:01.444801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.879 [2024-04-15 22:59:01.444885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.879 [2024-04-15 22:59:01.444900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.879 [2024-04-15 22:59:01.444908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.879 [2024-04-15 22:59:01.444915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.879 [2024-04-15 22:59:01.444928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.879 qpair failed and we were unable to recover it. 00:32:16.879 [2024-04-15 22:59:01.454785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.879 [2024-04-15 22:59:01.454852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.879 [2024-04-15 22:59:01.454868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.879 [2024-04-15 22:59:01.454875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.879 [2024-04-15 22:59:01.454882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.879 [2024-04-15 22:59:01.454896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.879 qpair failed and we were unable to recover it. 00:32:16.879 [2024-04-15 22:59:01.464704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.464773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.464791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.464799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.464805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.464820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 
00:32:16.880 [2024-04-15 22:59:01.474847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.474921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.474936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.474944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.474950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.474964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 00:32:16.880 [2024-04-15 22:59:01.484765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.484833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.484848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.484856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.484862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.484875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 00:32:16.880 [2024-04-15 22:59:01.494780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.494846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.494860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.494867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.494874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.494887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 
00:32:16.880 [2024-04-15 22:59:01.504952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.505027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.505042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.505049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.505056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.505073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 00:32:16.880 [2024-04-15 22:59:01.514966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.515083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.515100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.515108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.515114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.515127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 00:32:16.880 [2024-04-15 22:59:01.524958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.525038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.525053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.525060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.525068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.525081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 
00:32:16.880 [2024-04-15 22:59:01.535018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.535126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.535141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.535149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.535156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.535169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 00:32:16.880 [2024-04-15 22:59:01.545040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.545112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.545127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.545134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.545141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.545155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 00:32:16.880 [2024-04-15 22:59:01.554968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.555043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.555063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.555070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.555078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.555092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 
00:32:16.880 [2024-04-15 22:59:01.565090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.565193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.565208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.565216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.565222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.565236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 00:32:16.880 [2024-04-15 22:59:01.575203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.575289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.575305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.575312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.575319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.575333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 00:32:16.880 [2024-04-15 22:59:01.585090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.585162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.880 [2024-04-15 22:59:01.585177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.880 [2024-04-15 22:59:01.585184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.880 [2024-04-15 22:59:01.585191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.880 [2024-04-15 22:59:01.585205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.880 qpair failed and we were unable to recover it. 
00:32:16.880 [2024-04-15 22:59:01.595218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.880 [2024-04-15 22:59:01.595331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.595346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.595354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.595364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.595377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 00:32:16.881 [2024-04-15 22:59:01.605203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.605277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.605302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.605312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.605319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.605337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 00:32:16.881 [2024-04-15 22:59:01.615202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.615273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.615290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.615298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.615305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.615319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 
00:32:16.881 [2024-04-15 22:59:01.625333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.625443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.625460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.625468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.625475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.625489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 00:32:16.881 [2024-04-15 22:59:01.635289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.635360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.635376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.635383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.635389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.635403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 00:32:16.881 [2024-04-15 22:59:01.645203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.645282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.645301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.645311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.645318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.645332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 
00:32:16.881 [2024-04-15 22:59:01.655340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.655405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.655421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.655429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.655435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.655449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 00:32:16.881 [2024-04-15 22:59:01.665419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.665530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.665550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.665559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.665566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.665580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 00:32:16.881 [2024-04-15 22:59:01.675423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.675497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.675513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.675520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.675527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.675541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 
00:32:16.881 [2024-04-15 22:59:01.685426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.881 [2024-04-15 22:59:01.685501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.881 [2024-04-15 22:59:01.685518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.881 [2024-04-15 22:59:01.685527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.881 [2024-04-15 22:59:01.685538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:16.881 [2024-04-15 22:59:01.685559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.881 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.695447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.695540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.695561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.695568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.695576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.695598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.705435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.705513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.705528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.705536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.705548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.705563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 
00:32:17.144 [2024-04-15 22:59:01.715385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.715491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.715506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.715514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.715520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.715533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.725539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.725608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.725624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.725631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.725638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.725653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.735563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.735631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.735647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.735654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.735661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.735675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 
00:32:17.144 [2024-04-15 22:59:01.745541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.745603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.745618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.745626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.745632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.745646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.755605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.755673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.755689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.755696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.755703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.755718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.765677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.765744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.765759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.765766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.765773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.765787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 
00:32:17.144 [2024-04-15 22:59:01.775690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.775755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.775770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.775777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.775787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.775801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.785676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.785737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.785752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.785760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.785766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.785780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.795710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.795785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.795800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.795807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.795814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.795828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 
00:32:17.144 [2024-04-15 22:59:01.805858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.805924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.805939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.805946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.805953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.805967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.815780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.144 [2024-04-15 22:59:01.815841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.144 [2024-04-15 22:59:01.815857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.144 [2024-04-15 22:59:01.815864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.144 [2024-04-15 22:59:01.815870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.144 [2024-04-15 22:59:01.815883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.144 qpair failed and we were unable to recover it. 00:32:17.144 [2024-04-15 22:59:01.825674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.825737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.825752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.825760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.825766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.825780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 
00:32:17.145 [2024-04-15 22:59:01.835840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.835907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.835922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.835929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.835936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.835949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 00:32:17.145 [2024-04-15 22:59:01.845944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.846010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.846026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.846033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.846040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.846053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 00:32:17.145 [2024-04-15 22:59:01.855902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.855992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.856008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.856015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.856022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.856036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 
00:32:17.145 [2024-04-15 22:59:01.865779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.865837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.865852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.865859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.865869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.865882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 00:32:17.145 [2024-04-15 22:59:01.875926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.875989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.876004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.876011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.876018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.876031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 00:32:17.145 [2024-04-15 22:59:01.886007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.886097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.886113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.886120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.886127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.886140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 
00:32:17.145 [2024-04-15 22:59:01.896036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.896098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.896113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.896121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.896127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.896141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 00:32:17.145 [2024-04-15 22:59:01.905996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.906055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.906070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.906077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.906084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.906097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 00:32:17.145 [2024-04-15 22:59:01.916118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.916184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.916200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.916207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.916214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.916227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 
00:32:17.145 [2024-04-15 22:59:01.926089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.926160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.926185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.926195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.926202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.926220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 00:32:17.145 [2024-04-15 22:59:01.936100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.936175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.936201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.936210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.936217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.936236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 00:32:17.145 [2024-04-15 22:59:01.946109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.145 [2024-04-15 22:59:01.946176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.145 [2024-04-15 22:59:01.946201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.145 [2024-04-15 22:59:01.946210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.145 [2024-04-15 22:59:01.946218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.145 [2024-04-15 22:59:01.946236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.145 qpair failed and we were unable to recover it. 
00:32:17.408 [2024-04-15 22:59:01.956135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:01.956197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:01.956214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:01.956222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:01.956233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:01.956248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 00:32:17.408 [2024-04-15 22:59:01.966202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:01.966303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:01.966319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:01.966327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:01.966334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:01.966347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 00:32:17.408 [2024-04-15 22:59:01.976232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:01.976297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:01.976312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:01.976319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:01.976325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:01.976339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 
00:32:17.408 [2024-04-15 22:59:01.986113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:01.986177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:01.986192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:01.986199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:01.986206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:01.986219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 00:32:17.408 [2024-04-15 22:59:01.996232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:01.996296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:01.996311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:01.996318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:01.996325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:01.996338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 00:32:17.408 [2024-04-15 22:59:02.006272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:02.006370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:02.006386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:02.006394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:02.006400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:02.006414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 
00:32:17.408 [2024-04-15 22:59:02.016336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:02.016404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:02.016420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:02.016427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:02.016433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:02.016448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 00:32:17.408 [2024-04-15 22:59:02.026408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:02.026470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:02.026486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:02.026493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:02.026499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:02.026513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 00:32:17.408 [2024-04-15 22:59:02.036340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:02.036409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:02.036424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:02.036431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:02.036438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:02.036452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 
00:32:17.408 [2024-04-15 22:59:02.046360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:02.046434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:02.046454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:02.046466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:02.046472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:02.046487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 00:32:17.408 [2024-04-15 22:59:02.056440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:02.056510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:02.056526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:02.056534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.408 [2024-04-15 22:59:02.056540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.408 [2024-04-15 22:59:02.056559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.408 qpair failed and we were unable to recover it. 00:32:17.408 [2024-04-15 22:59:02.066526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.408 [2024-04-15 22:59:02.066593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.408 [2024-04-15 22:59:02.066609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.408 [2024-04-15 22:59:02.066616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.066623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.066638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 
00:32:17.409 [2024-04-15 22:59:02.076453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.076534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.076554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.076561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.076568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.076582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.409 [2024-04-15 22:59:02.086368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.086430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.086445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.086453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.086459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.086472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.409 [2024-04-15 22:59:02.096559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.096664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.096681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.096688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.096694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.096708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 
00:32:17.409 [2024-04-15 22:59:02.106536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.106598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.106613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.106620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.106627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.106640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.409 [2024-04-15 22:59:02.116579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.116640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.116656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.116663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.116669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.116683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.409 [2024-04-15 22:59:02.126612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.126672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.126687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.126694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.126701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.126714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 
00:32:17.409 [2024-04-15 22:59:02.136633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.136755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.136771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.136781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.136788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.136801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.409 [2024-04-15 22:59:02.146666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.146768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.146783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.146791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.146798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.146811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.409 [2024-04-15 22:59:02.156681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.156805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.156822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.156829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.156835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.156850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 
00:32:17.409 [2024-04-15 22:59:02.166709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.166768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.166783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.166790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.166797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.166810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.409 [2024-04-15 22:59:02.176774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.176856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.176871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.176879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.176886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.176899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.409 [2024-04-15 22:59:02.186777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.186837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.186852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.186859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.186865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.186878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 
00:32:17.409 [2024-04-15 22:59:02.196779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.409 [2024-04-15 22:59:02.196848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.409 [2024-04-15 22:59:02.196862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.409 [2024-04-15 22:59:02.196869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.409 [2024-04-15 22:59:02.196876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.409 [2024-04-15 22:59:02.196889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.409 qpair failed and we were unable to recover it. 00:32:17.410 [2024-04-15 22:59:02.206739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.410 [2024-04-15 22:59:02.206804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.410 [2024-04-15 22:59:02.206819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.410 [2024-04-15 22:59:02.206826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.410 [2024-04-15 22:59:02.206833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.410 [2024-04-15 22:59:02.206847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.410 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.216868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.216929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.216944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.216951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.216958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.216971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 
00:32:17.671 [2024-04-15 22:59:02.226891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.226956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.226971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.226986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.226993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.227007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.236922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.237003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.237017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.237025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.237031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.237044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.246933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.246996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.247011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.247019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.247025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.247038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 
00:32:17.671 [2024-04-15 22:59:02.256961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.257025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.257039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.257046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.257053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.257066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.267014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.267070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.267085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.267092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.267099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.267112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.276890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.276961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.276976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.276983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.276990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.277003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 
00:32:17.671 [2024-04-15 22:59:02.287022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.287081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.287096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.287103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.287110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.287123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.297068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.297123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.297138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.297145] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.297152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.297165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.307116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.307178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.307195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.307202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.307209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.307223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 
00:32:17.671 [2024-04-15 22:59:02.317118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.317188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.317203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.317214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.317220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.317235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.327145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.327201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.327217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.327224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.327231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.671 [2024-04-15 22:59:02.327244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.671 qpair failed and we were unable to recover it. 00:32:17.671 [2024-04-15 22:59:02.337161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.671 [2024-04-15 22:59:02.337221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.671 [2024-04-15 22:59:02.337236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.671 [2024-04-15 22:59:02.337243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.671 [2024-04-15 22:59:02.337250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.337263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 
00:32:17.672 [2024-04-15 22:59:02.347199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.347265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.347290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.347298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.347306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.347325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.357233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.357312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.357330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.357337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.357345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.357359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.367256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.367324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.367349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.367359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.367366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.367385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 
00:32:17.672 [2024-04-15 22:59:02.377283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.377353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.377378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.377387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.377395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.377413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.387215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.387282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.387300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.387308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.387315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.387330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.397336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.397409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.397426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.397434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.397441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.397455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 
00:32:17.672 [2024-04-15 22:59:02.407288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.407396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.407413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.407425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.407431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.407445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.417422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.417478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.417494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.417501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.417508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.417521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.427423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.427525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.427540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.427553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.427561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.427575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 
00:32:17.672 [2024-04-15 22:59:02.437478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.437550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.437565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.437572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.437578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.437593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.447358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.447416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.447431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.447438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.447445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.447458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.457506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.457616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.457633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.457640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.457647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.457660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 
00:32:17.672 [2024-04-15 22:59:02.467427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.672 [2024-04-15 22:59:02.467492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.672 [2024-04-15 22:59:02.467508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.672 [2024-04-15 22:59:02.467515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.672 [2024-04-15 22:59:02.467521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.672 [2024-04-15 22:59:02.467537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.672 qpair failed and we were unable to recover it. 00:32:17.672 [2024-04-15 22:59:02.477558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.673 [2024-04-15 22:59:02.477623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.673 [2024-04-15 22:59:02.477639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.673 [2024-04-15 22:59:02.477646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.673 [2024-04-15 22:59:02.477653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.673 [2024-04-15 22:59:02.477667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.673 qpair failed and we were unable to recover it. 00:32:17.935 [2024-04-15 22:59:02.487633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.935 [2024-04-15 22:59:02.487700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.935 [2024-04-15 22:59:02.487715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.935 [2024-04-15 22:59:02.487723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.935 [2024-04-15 22:59:02.487730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.935 [2024-04-15 22:59:02.487743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.935 qpair failed and we were unable to recover it. 
00:32:17.936 [2024-04-15 22:59:02.497535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.497602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.497618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.497629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.497636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.497650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.507641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.507697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.507713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.507720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.507727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.507741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.517679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.517746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.517762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.517770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.517776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.517790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 
00:32:17.936 [2024-04-15 22:59:02.527704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.527767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.527784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.527791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.527798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.527812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.537733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.537799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.537814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.537821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.537830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.537844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.547627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.547705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.547720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.547728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.547734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.547748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 
00:32:17.936 [2024-04-15 22:59:02.557787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.557855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.557871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.557878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.557884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.557898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.567847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.567942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.567957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.567965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.567971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.567985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.577823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.577919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.577935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.577943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.577949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.577967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 
00:32:17.936 [2024-04-15 22:59:02.587859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.587923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.587941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.587949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.587956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.587970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.597902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.597965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.597981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.597988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.597994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.598007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.607907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.607971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.607986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.607993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.607999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.608012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 
00:32:17.936 [2024-04-15 22:59:02.617928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.618012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.618027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.936 [2024-04-15 22:59:02.618035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.936 [2024-04-15 22:59:02.618042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.936 [2024-04-15 22:59:02.618055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.936 qpair failed and we were unable to recover it. 00:32:17.936 [2024-04-15 22:59:02.627953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.936 [2024-04-15 22:59:02.628020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.936 [2024-04-15 22:59:02.628035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.628043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.628049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.628062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 00:32:17.937 [2024-04-15 22:59:02.637869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.637935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.637951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.637958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.637964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.637978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 
00:32:17.937 [2024-04-15 22:59:02.647997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.648060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.648075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.648083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.648089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.648103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 00:32:17.937 [2024-04-15 22:59:02.658035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.658101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.658116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.658123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.658130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.658144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 00:32:17.937 [2024-04-15 22:59:02.668046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.668122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.668137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.668145] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.668152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.668166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 
00:32:17.937 [2024-04-15 22:59:02.678084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.678151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.678169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.678176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.678183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.678197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 00:32:17.937 [2024-04-15 22:59:02.688114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.688181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.688196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.688203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.688209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.688224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 00:32:17.937 [2024-04-15 22:59:02.698142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.698225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.698241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.698249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.698255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.698269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 
00:32:17.937 [2024-04-15 22:59:02.708158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.708227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.708242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.708249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.708256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.708269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 00:32:17.937 [2024-04-15 22:59:02.718206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.718279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.718305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.718315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.718322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.718340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 00:32:17.937 [2024-04-15 22:59:02.728238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.728324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.728344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.728352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.728359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.728374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 
00:32:17.937 [2024-04-15 22:59:02.738227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.937 [2024-04-15 22:59:02.738337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.937 [2024-04-15 22:59:02.738355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.937 [2024-04-15 22:59:02.738362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.937 [2024-04-15 22:59:02.738369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:17.937 [2024-04-15 22:59:02.738383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.937 qpair failed and we were unable to recover it. 00:32:18.200 [2024-04-15 22:59:02.748268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.200 [2024-04-15 22:59:02.748332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.200 [2024-04-15 22:59:02.748357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.200 [2024-04-15 22:59:02.748366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.200 [2024-04-15 22:59:02.748373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.200 [2024-04-15 22:59:02.748392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.200 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.758293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.758368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.758393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.758403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.758411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.758430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 
00:32:18.201 [2024-04-15 22:59:02.768318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.768386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.768407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.768415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.768422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.768437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.778346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.778456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.778472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.778479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.778485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.778499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.788392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.788450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.788465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.788473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.788479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.788493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 
00:32:18.201 [2024-04-15 22:59:02.798415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.798484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.798499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.798507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.798513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.798528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.808428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.808493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.808509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.808517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.808523] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.808546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.818536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.818603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.818619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.818626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.818633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.818647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 
00:32:18.201 [2024-04-15 22:59:02.828487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.828549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.828564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.828572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.828578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.828592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.838551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.838614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.838629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.838637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.838643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.838657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.848493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.848619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.848635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.848643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.848649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.848663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 
00:32:18.201 [2024-04-15 22:59:02.858568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.858649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.858668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.858676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.858682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.858697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.868608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.868675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.868691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.868698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.868704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.201 [2024-04-15 22:59:02.868718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.201 qpair failed and we were unable to recover it. 00:32:18.201 [2024-04-15 22:59:02.878606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.201 [2024-04-15 22:59:02.878674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.201 [2024-04-15 22:59:02.878689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.201 [2024-04-15 22:59:02.878696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.201 [2024-04-15 22:59:02.878703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.878716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 
00:32:18.202 [2024-04-15 22:59:02.888644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.888709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.888724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.888732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.888738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.888751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 00:32:18.202 [2024-04-15 22:59:02.898695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.898781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.898797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.898804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.898812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.898833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 00:32:18.202 [2024-04-15 22:59:02.908708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.908774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.908789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.908797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.908803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.908816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 
00:32:18.202 [2024-04-15 22:59:02.918725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.918791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.918807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.918815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.918822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.918835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 00:32:18.202 [2024-04-15 22:59:02.928770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.928834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.928849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.928856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.928863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.928876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 00:32:18.202 [2024-04-15 22:59:02.938817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.938879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.938894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.938901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.938908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.938921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 
00:32:18.202 [2024-04-15 22:59:02.948829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.948894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.948913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.948920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.948926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.948940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 00:32:18.202 [2024-04-15 22:59:02.958740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.958818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.958833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.958841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.958848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.958861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 00:32:18.202 [2024-04-15 22:59:02.968870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.968930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.968944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.968951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.968958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.968971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 
00:32:18.202 [2024-04-15 22:59:02.978791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.978854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.978869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.978877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.978883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.978896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 00:32:18.202 [2024-04-15 22:59:02.988969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.989028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.989044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.989051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.989058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.989075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 00:32:18.202 [2024-04-15 22:59:02.998958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.202 [2024-04-15 22:59:02.999026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.202 [2024-04-15 22:59:02.999041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.202 [2024-04-15 22:59:02.999048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.202 [2024-04-15 22:59:02.999054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.202 [2024-04-15 22:59:02.999068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.202 qpair failed and we were unable to recover it. 
00:32:18.465 [2024-04-15 22:59:03.008975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.009047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.465 [2024-04-15 22:59:03.009062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.465 [2024-04-15 22:59:03.009069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.465 [2024-04-15 22:59:03.009077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.465 [2024-04-15 22:59:03.009090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.465 qpair failed and we were unable to recover it. 00:32:18.465 [2024-04-15 22:59:03.019000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.019064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.465 [2024-04-15 22:59:03.019079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.465 [2024-04-15 22:59:03.019087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.465 [2024-04-15 22:59:03.019094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.465 [2024-04-15 22:59:03.019107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.465 qpair failed and we were unable to recover it. 00:32:18.465 [2024-04-15 22:59:03.029005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.029087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.465 [2024-04-15 22:59:03.029102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.465 [2024-04-15 22:59:03.029111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.465 [2024-04-15 22:59:03.029118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.465 [2024-04-15 22:59:03.029131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.465 qpair failed and we were unable to recover it. 
00:32:18.465 [2024-04-15 22:59:03.039078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.039139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.465 [2024-04-15 22:59:03.039158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.465 [2024-04-15 22:59:03.039166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.465 [2024-04-15 22:59:03.039172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.465 [2024-04-15 22:59:03.039186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.465 qpair failed and we were unable to recover it. 00:32:18.465 [2024-04-15 22:59:03.049136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.049197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.465 [2024-04-15 22:59:03.049212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.465 [2024-04-15 22:59:03.049220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.465 [2024-04-15 22:59:03.049226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.465 [2024-04-15 22:59:03.049239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.465 qpair failed and we were unable to recover it. 00:32:18.465 [2024-04-15 22:59:03.059135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.059202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.465 [2024-04-15 22:59:03.059227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.465 [2024-04-15 22:59:03.059236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.465 [2024-04-15 22:59:03.059243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.465 [2024-04-15 22:59:03.059261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.465 qpair failed and we were unable to recover it. 
00:32:18.465 [2024-04-15 22:59:03.069197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.069297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.465 [2024-04-15 22:59:03.069323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.465 [2024-04-15 22:59:03.069331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.465 [2024-04-15 22:59:03.069338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.465 [2024-04-15 22:59:03.069356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.465 qpair failed and we were unable to recover it. 00:32:18.465 [2024-04-15 22:59:03.079136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.079203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.465 [2024-04-15 22:59:03.079221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.465 [2024-04-15 22:59:03.079228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.465 [2024-04-15 22:59:03.079234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.465 [2024-04-15 22:59:03.079254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.465 qpair failed and we were unable to recover it. 00:32:18.465 [2024-04-15 22:59:03.089182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.465 [2024-04-15 22:59:03.089243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.089259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.089266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.089272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.089286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 
00:32:18.466 [2024-04-15 22:59:03.099236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.099296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.099312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.099319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.099326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.099339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 00:32:18.466 [2024-04-15 22:59:03.109254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.109316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.109331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.109339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.109345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.109358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 00:32:18.466 [2024-04-15 22:59:03.119199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.119272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.119288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.119296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.119302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.119316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 
00:32:18.466 [2024-04-15 22:59:03.129318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.129384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.129403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.129410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.129417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.129430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 00:32:18.466 [2024-04-15 22:59:03.139310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.139379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.139395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.139403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.139409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.139423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 00:32:18.466 [2024-04-15 22:59:03.149258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.149321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.149337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.149344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.149351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.149364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 
00:32:18.466 [2024-04-15 22:59:03.159416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.159485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.159501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.159508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.159516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.159530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 00:32:18.466 [2024-04-15 22:59:03.169364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.169424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.169439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.169447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.169453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.169470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 00:32:18.466 [2024-04-15 22:59:03.179436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.179500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.179515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.179522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.179529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.179548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 
00:32:18.466 [2024-04-15 22:59:03.189479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.189541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.189560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.189567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.189574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.189588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 00:32:18.466 [2024-04-15 22:59:03.199517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.199584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.199600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.199608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.199614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.199628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 00:32:18.466 [2024-04-15 22:59:03.209529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.209595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.209610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.209618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.209625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.466 [2024-04-15 22:59:03.209639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.466 qpair failed and we were unable to recover it. 
00:32:18.466 [2024-04-15 22:59:03.219560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.466 [2024-04-15 22:59:03.219622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.466 [2024-04-15 22:59:03.219642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.466 [2024-04-15 22:59:03.219649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.466 [2024-04-15 22:59:03.219656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.467 [2024-04-15 22:59:03.219670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.467 qpair failed and we were unable to recover it. 00:32:18.467 [2024-04-15 22:59:03.229647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.467 [2024-04-15 22:59:03.229707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.467 [2024-04-15 22:59:03.229723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.467 [2024-04-15 22:59:03.229730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.467 [2024-04-15 22:59:03.229736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.467 [2024-04-15 22:59:03.229750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.467 qpair failed and we were unable to recover it. 00:32:18.467 [2024-04-15 22:59:03.239606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.467 [2024-04-15 22:59:03.239676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.467 [2024-04-15 22:59:03.239692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.467 [2024-04-15 22:59:03.239699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.467 [2024-04-15 22:59:03.239705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.467 [2024-04-15 22:59:03.239719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.467 qpair failed and we were unable to recover it. 
00:32:18.467 [2024-04-15 22:59:03.249650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.467 [2024-04-15 22:59:03.249711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.467 [2024-04-15 22:59:03.249726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.467 [2024-04-15 22:59:03.249733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.467 [2024-04-15 22:59:03.249740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.467 [2024-04-15 22:59:03.249753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.467 qpair failed and we were unable to recover it. 00:32:18.467 [2024-04-15 22:59:03.259669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.467 [2024-04-15 22:59:03.259733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.467 [2024-04-15 22:59:03.259748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.467 [2024-04-15 22:59:03.259756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.467 [2024-04-15 22:59:03.259767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.467 [2024-04-15 22:59:03.259781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.467 qpair failed and we were unable to recover it. 00:32:18.467 [2024-04-15 22:59:03.269685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.467 [2024-04-15 22:59:03.269752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.467 [2024-04-15 22:59:03.269767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.467 [2024-04-15 22:59:03.269774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.467 [2024-04-15 22:59:03.269781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.467 [2024-04-15 22:59:03.269794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.467 qpair failed and we were unable to recover it. 
00:32:18.729 [2024-04-15 22:59:03.279669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.279732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.279747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.279754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.279760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.279774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 00:32:18.729 [2024-04-15 22:59:03.289773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.289831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.289846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.289853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.289860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.289873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 00:32:18.729 [2024-04-15 22:59:03.299756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.299817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.299831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.299839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.299845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.299858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 
00:32:18.729 [2024-04-15 22:59:03.309678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.309747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.309764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.309771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.309777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.309792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 00:32:18.729 [2024-04-15 22:59:03.319835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.319898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.319913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.319920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.319927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.319940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 00:32:18.729 [2024-04-15 22:59:03.329830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.329887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.329902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.329909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.329916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.329929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 
00:32:18.729 [2024-04-15 22:59:03.339748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.339805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.339820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.339827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.339833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.339847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 00:32:18.729 [2024-04-15 22:59:03.349899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.349960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.349975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.349982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.349992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.350005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 00:32:18.729 [2024-04-15 22:59:03.359924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.360010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.360025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.360032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.360038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.360051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 
00:32:18.729 [2024-04-15 22:59:03.369935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.369996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.370012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.370019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.370025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.370038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 00:32:18.729 [2024-04-15 22:59:03.379947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.380010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.380024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.380032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.380038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.380051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 00:32:18.729 [2024-04-15 22:59:03.390051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.729 [2024-04-15 22:59:03.390127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.729 [2024-04-15 22:59:03.390142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.729 [2024-04-15 22:59:03.390150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.729 [2024-04-15 22:59:03.390157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.729 [2024-04-15 22:59:03.390170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.729 qpair failed and we were unable to recover it. 
00:32:18.730 [2024-04-15 22:59:03.400011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.400075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.400090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.400097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.400103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.400116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.410041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.410103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.410118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.410125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.410132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.410145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.420027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.420153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.420168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.420176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.420182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.420195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 
00:32:18.730 [2024-04-15 22:59:03.430110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.430169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.430186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.430193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.430200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.430215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.440141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.440214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.440239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.440248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.440260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.440278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.450163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.450226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.450251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.450260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.450267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.450285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 
00:32:18.730 [2024-04-15 22:59:03.460261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.460330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.460356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.460364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.460371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.460389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.470243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.470337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.470354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.470361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.470367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.470382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.480226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.480290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.480306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.480313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.480319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.480333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 
00:32:18.730 [2024-04-15 22:59:03.490259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.490323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.490338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.490345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.490352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.490365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.500197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.500269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.500294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.500302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.500309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.500330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.510319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.510377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.510394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.510402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.510408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.510423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 
00:32:18.730 [2024-04-15 22:59:03.520352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.520432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.520456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.520465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.730 [2024-04-15 22:59:03.520472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.730 [2024-04-15 22:59:03.520490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.730 qpair failed and we were unable to recover it. 00:32:18.730 [2024-04-15 22:59:03.530382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.730 [2024-04-15 22:59:03.530437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.730 [2024-04-15 22:59:03.530454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.730 [2024-04-15 22:59:03.530462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.731 [2024-04-15 22:59:03.530473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.731 [2024-04-15 22:59:03.530487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.731 qpair failed and we were unable to recover it. 00:32:18.993 [2024-04-15 22:59:03.540286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.993 [2024-04-15 22:59:03.540354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.993 [2024-04-15 22:59:03.540370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.993 [2024-04-15 22:59:03.540377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.993 [2024-04-15 22:59:03.540384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.993 [2024-04-15 22:59:03.540397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.993 qpair failed and we were unable to recover it. 
00:32:18.993 [2024-04-15 22:59:03.550485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.993 [2024-04-15 22:59:03.550565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.993 [2024-04-15 22:59:03.550581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.993 [2024-04-15 22:59:03.550588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.993 [2024-04-15 22:59:03.550594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.993 [2024-04-15 22:59:03.550608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.993 qpair failed and we were unable to recover it. 00:32:18.993 [2024-04-15 22:59:03.560364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.993 [2024-04-15 22:59:03.560436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.993 [2024-04-15 22:59:03.560451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.993 [2024-04-15 22:59:03.560458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.993 [2024-04-15 22:59:03.560464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.993 [2024-04-15 22:59:03.560478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.993 qpair failed and we were unable to recover it. 00:32:18.993 [2024-04-15 22:59:03.570371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.993 [2024-04-15 22:59:03.570431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.993 [2024-04-15 22:59:03.570446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.993 [2024-04-15 22:59:03.570453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.993 [2024-04-15 22:59:03.570460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.993 [2024-04-15 22:59:03.570473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.993 qpair failed and we were unable to recover it. 
00:32:18.993 [2024-04-15 22:59:03.580501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.993 [2024-04-15 22:59:03.580570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.993 [2024-04-15 22:59:03.580585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.993 [2024-04-15 22:59:03.580593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.993 [2024-04-15 22:59:03.580599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.993 [2024-04-15 22:59:03.580613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.993 qpair failed and we were unable to recover it. 00:32:18.993 [2024-04-15 22:59:03.590568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.993 [2024-04-15 22:59:03.590653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.993 [2024-04-15 22:59:03.590669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.993 [2024-04-15 22:59:03.590676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.993 [2024-04-15 22:59:03.590682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.993 [2024-04-15 22:59:03.590696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.993 qpair failed and we were unable to recover it. 00:32:18.993 [2024-04-15 22:59:03.600589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.993 [2024-04-15 22:59:03.600649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.993 [2024-04-15 22:59:03.600664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.993 [2024-04-15 22:59:03.600671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.993 [2024-04-15 22:59:03.600677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.600690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 
00:32:18.994 [2024-04-15 22:59:03.610527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.610587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.610603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.610610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.610616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.610630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 00:32:18.994 [2024-04-15 22:59:03.620624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.620690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.620705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.620712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.620722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.620737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 00:32:18.994 [2024-04-15 22:59:03.630638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.630700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.630715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.630722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.630729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.630742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 
00:32:18.994 [2024-04-15 22:59:03.640671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.640736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.640751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.640758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.640765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.640778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 00:32:18.994 [2024-04-15 22:59:03.650682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.650741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.650756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.650763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.650769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.650783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 00:32:18.994 [2024-04-15 22:59:03.660721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.660780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.660795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.660802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.660809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.660822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 
00:32:18.994 [2024-04-15 22:59:03.670756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.670818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.670833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.670840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.670847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.670860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 00:32:18.994 [2024-04-15 22:59:03.680786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.680851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.680867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.680875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.680881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.680894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 00:32:18.994 [2024-04-15 22:59:03.690814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.690875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.690890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.690897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.690903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.690917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 
00:32:18.994 [2024-04-15 22:59:03.700825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.700887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.700902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.700910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.700916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.700930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 00:32:18.994 [2024-04-15 22:59:03.710896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.710973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.710988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.710999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.711005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.711018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 00:32:18.994 [2024-04-15 22:59:03.720873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.720931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.720946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.720953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.720960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.994 [2024-04-15 22:59:03.720973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.994 qpair failed and we were unable to recover it. 
00:32:18.994 [2024-04-15 22:59:03.730896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.994 [2024-04-15 22:59:03.730961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.994 [2024-04-15 22:59:03.730976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.994 [2024-04-15 22:59:03.730983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.994 [2024-04-15 22:59:03.730989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.995 [2024-04-15 22:59:03.731003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.995 qpair failed and we were unable to recover it. 00:32:18.995 [2024-04-15 22:59:03.740879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.995 [2024-04-15 22:59:03.740948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.995 [2024-04-15 22:59:03.740963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.995 [2024-04-15 22:59:03.740970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.995 [2024-04-15 22:59:03.740977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.995 [2024-04-15 22:59:03.740991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.995 qpair failed and we were unable to recover it. 00:32:18.995 [2024-04-15 22:59:03.750941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.995 [2024-04-15 22:59:03.751004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.995 [2024-04-15 22:59:03.751020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.995 [2024-04-15 22:59:03.751027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.995 [2024-04-15 22:59:03.751033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.995 [2024-04-15 22:59:03.751046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.995 qpair failed and we were unable to recover it. 
00:32:18.995 [2024-04-15 22:59:03.760992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.995 [2024-04-15 22:59:03.761058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.995 [2024-04-15 22:59:03.761074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.995 [2024-04-15 22:59:03.761081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.995 [2024-04-15 22:59:03.761088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.995 [2024-04-15 22:59:03.761101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.995 qpair failed and we were unable to recover it. 00:32:18.995 [2024-04-15 22:59:03.771001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.995 [2024-04-15 22:59:03.771060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.995 [2024-04-15 22:59:03.771075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.995 [2024-04-15 22:59:03.771083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.995 [2024-04-15 22:59:03.771089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.995 [2024-04-15 22:59:03.771103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.995 qpair failed and we were unable to recover it. 00:32:18.995 [2024-04-15 22:59:03.781076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.995 [2024-04-15 22:59:03.781148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.995 [2024-04-15 22:59:03.781165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.995 [2024-04-15 22:59:03.781176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.995 [2024-04-15 22:59:03.781183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.995 [2024-04-15 22:59:03.781197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.995 qpair failed and we were unable to recover it. 
00:32:18.995 [2024-04-15 22:59:03.791100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.995 [2024-04-15 22:59:03.791161] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.995 [2024-04-15 22:59:03.791177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.995 [2024-04-15 22:59:03.791185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.995 [2024-04-15 22:59:03.791192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:18.995 [2024-04-15 22:59:03.791206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.995 qpair failed and we were unable to recover it. 00:32:19.257 [2024-04-15 22:59:03.801171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.257 [2024-04-15 22:59:03.801250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.257 [2024-04-15 22:59:03.801265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.257 [2024-04-15 22:59:03.801278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.257 [2024-04-15 22:59:03.801284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.257 [2024-04-15 22:59:03.801298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.257 qpair failed and we were unable to recover it. 00:32:19.257 [2024-04-15 22:59:03.811140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.257 [2024-04-15 22:59:03.811206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.257 [2024-04-15 22:59:03.811231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.257 [2024-04-15 22:59:03.811241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.257 [2024-04-15 22:59:03.811248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.257 [2024-04-15 22:59:03.811267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.257 qpair failed and we were unable to recover it. 
00:32:19.257 [2024-04-15 22:59:03.821234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.257 [2024-04-15 22:59:03.821303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.257 [2024-04-15 22:59:03.821328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.257 [2024-04-15 22:59:03.821338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.257 [2024-04-15 22:59:03.821345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.257 [2024-04-15 22:59:03.821364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.257 qpair failed and we were unable to recover it. 00:32:19.257 [2024-04-15 22:59:03.831178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.257 [2024-04-15 22:59:03.831244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.257 [2024-04-15 22:59:03.831268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.257 [2024-04-15 22:59:03.831277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.257 [2024-04-15 22:59:03.831285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.257 [2024-04-15 22:59:03.831303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.257 qpair failed and we were unable to recover it. 00:32:19.257 [2024-04-15 22:59:03.841242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.257 [2024-04-15 22:59:03.841323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.257 [2024-04-15 22:59:03.841348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.257 [2024-04-15 22:59:03.841357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.257 [2024-04-15 22:59:03.841364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.257 [2024-04-15 22:59:03.841382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.257 qpair failed and we were unable to recover it. 
00:32:19.257 [2024-04-15 22:59:03.851259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.257 [2024-04-15 22:59:03.851322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.257 [2024-04-15 22:59:03.851339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.257 [2024-04-15 22:59:03.851346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.257 [2024-04-15 22:59:03.851353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.257 [2024-04-15 22:59:03.851368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.257 qpair failed and we were unable to recover it. 00:32:19.257 [2024-04-15 22:59:03.861224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.257 [2024-04-15 22:59:03.861287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.257 [2024-04-15 22:59:03.861303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.257 [2024-04-15 22:59:03.861311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.257 [2024-04-15 22:59:03.861317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.257 [2024-04-15 22:59:03.861332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.257 qpair failed and we were unable to recover it. 00:32:19.257 [2024-04-15 22:59:03.871298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.257 [2024-04-15 22:59:03.871365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.871390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.871399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.871407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.871425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 
00:32:19.258 [2024-04-15 22:59:03.881301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.881364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.881380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.881387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.881394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.881408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 00:32:19.258 [2024-04-15 22:59:03.891350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.891405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.891420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.891432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.891439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.891453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 00:32:19.258 [2024-04-15 22:59:03.901376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.901446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.901462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.901469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.901476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.901490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 
00:32:19.258 [2024-04-15 22:59:03.911408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.911469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.911485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.911492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.911498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.911512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 00:32:19.258 [2024-04-15 22:59:03.921435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.921532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.921552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.921560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.921566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.921580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 00:32:19.258 [2024-04-15 22:59:03.931448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.931512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.931527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.931535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.931541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.931562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 
00:32:19.258 [2024-04-15 22:59:03.941468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.941548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.941565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.941573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.941580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.941595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 00:32:19.258 [2024-04-15 22:59:03.951403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.951464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.951480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.951488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.951494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.951508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 00:32:19.258 [2024-04-15 22:59:03.961529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.961591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.961607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.961614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.961621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.961634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 
00:32:19.258 [2024-04-15 22:59:03.971529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.971594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.971609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.971616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.971623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.971637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 00:32:19.258 [2024-04-15 22:59:03.981476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.981539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.981558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.981569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.981576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.981591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 00:32:19.258 [2024-04-15 22:59:03.991512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:03.991578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:03.991593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:03.991600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:03.991607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.258 [2024-04-15 22:59:03.991620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.258 qpair failed and we were unable to recover it. 
00:32:19.258 [2024-04-15 22:59:04.001568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.258 [2024-04-15 22:59:04.001633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.258 [2024-04-15 22:59:04.001648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.258 [2024-04-15 22:59:04.001656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.258 [2024-04-15 22:59:04.001662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.259 [2024-04-15 22:59:04.001676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.259 qpair failed and we were unable to recover it. 00:32:19.259 [2024-04-15 22:59:04.011700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.259 [2024-04-15 22:59:04.011780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.259 [2024-04-15 22:59:04.011795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.259 [2024-04-15 22:59:04.011803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.259 [2024-04-15 22:59:04.011809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.259 [2024-04-15 22:59:04.011823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.259 qpair failed and we were unable to recover it. 00:32:19.259 [2024-04-15 22:59:04.021598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.259 [2024-04-15 22:59:04.021661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.259 [2024-04-15 22:59:04.021676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.259 [2024-04-15 22:59:04.021683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.259 [2024-04-15 22:59:04.021690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.259 [2024-04-15 22:59:04.021703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.259 qpair failed and we were unable to recover it. 
00:32:19.259 [2024-04-15 22:59:04.031735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.259 [2024-04-15 22:59:04.031797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.259 [2024-04-15 22:59:04.031812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.259 [2024-04-15 22:59:04.031819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.259 [2024-04-15 22:59:04.031826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.259 [2024-04-15 22:59:04.031839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.259 qpair failed and we were unable to recover it. 00:32:19.259 [2024-04-15 22:59:04.041691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.259 [2024-04-15 22:59:04.041756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.259 [2024-04-15 22:59:04.041770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.259 [2024-04-15 22:59:04.041778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.259 [2024-04-15 22:59:04.041784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.259 [2024-04-15 22:59:04.041797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.259 qpair failed and we were unable to recover it. 00:32:19.259 [2024-04-15 22:59:04.051793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.259 [2024-04-15 22:59:04.051890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.259 [2024-04-15 22:59:04.051906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.259 [2024-04-15 22:59:04.051914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.259 [2024-04-15 22:59:04.051920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.259 [2024-04-15 22:59:04.051933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.259 qpair failed and we were unable to recover it. 
00:32:19.259 [2024-04-15 22:59:04.061816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.259 [2024-04-15 22:59:04.061879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.259 [2024-04-15 22:59:04.061894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.259 [2024-04-15 22:59:04.061901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.259 [2024-04-15 22:59:04.061908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.259 [2024-04-15 22:59:04.061921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.259 qpair failed and we were unable to recover it. 00:32:19.521 [2024-04-15 22:59:04.071842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.521 [2024-04-15 22:59:04.071903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.521 [2024-04-15 22:59:04.071918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.521 [2024-04-15 22:59:04.071929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.521 [2024-04-15 22:59:04.071936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.521 [2024-04-15 22:59:04.071950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.521 qpair failed and we were unable to recover it. 00:32:19.521 [2024-04-15 22:59:04.081881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.521 [2024-04-15 22:59:04.081943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.521 [2024-04-15 22:59:04.081958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.521 [2024-04-15 22:59:04.081966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.521 [2024-04-15 22:59:04.081972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.521 [2024-04-15 22:59:04.081985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.521 qpair failed and we were unable to recover it. 
00:32:19.521 [2024-04-15 22:59:04.091902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.521 [2024-04-15 22:59:04.091966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.521 [2024-04-15 22:59:04.091981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.521 [2024-04-15 22:59:04.091988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.521 [2024-04-15 22:59:04.091994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.521 [2024-04-15 22:59:04.092008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.521 qpair failed and we were unable to recover it. 00:32:19.521 [2024-04-15 22:59:04.101999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.521 [2024-04-15 22:59:04.102062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.521 [2024-04-15 22:59:04.102077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.521 [2024-04-15 22:59:04.102084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.521 [2024-04-15 22:59:04.102091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.521 [2024-04-15 22:59:04.102106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.521 qpair failed and we were unable to recover it. 00:32:19.521 [2024-04-15 22:59:04.111964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.521 [2024-04-15 22:59:04.112053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.521 [2024-04-15 22:59:04.112068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.112075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.112082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.112095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 
00:32:19.522 [2024-04-15 22:59:04.121992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.122056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.122071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.122079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.122085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.122099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 00:32:19.522 [2024-04-15 22:59:04.132037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.132098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.132113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.132120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.132127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.132140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 00:32:19.522 [2024-04-15 22:59:04.141970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.142037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.142053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.142060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.142066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.142080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 
00:32:19.522 [2024-04-15 22:59:04.152040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.152098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.152113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.152120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.152127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.152140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 00:32:19.522 [2024-04-15 22:59:04.162095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.162156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.162171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.162181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.162188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.162202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 00:32:19.522 [2024-04-15 22:59:04.172103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.172166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.172181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.172189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.172195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.172208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 
00:32:19.522 [2024-04-15 22:59:04.182034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.182091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.182106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.182113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.182120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.182133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 00:32:19.522 [2024-04-15 22:59:04.192178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.192241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.192256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.192263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.192269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.192282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 00:32:19.522 [2024-04-15 22:59:04.202214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.202277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.202292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.202299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.202306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.202319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 
00:32:19.522 [2024-04-15 22:59:04.212311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.212382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.212407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.212416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.212423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.212442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 00:32:19.522 [2024-04-15 22:59:04.222244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.222312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.222337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.222347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.222354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.222373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 00:32:19.522 [2024-04-15 22:59:04.232284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.232355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.232381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.232389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.232396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.522 [2024-04-15 22:59:04.232414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.522 qpair failed and we were unable to recover it. 
00:32:19.522 [2024-04-15 22:59:04.242323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.522 [2024-04-15 22:59:04.242390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.522 [2024-04-15 22:59:04.242407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.522 [2024-04-15 22:59:04.242415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.522 [2024-04-15 22:59:04.242422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.242436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 00:32:19.523 [2024-04-15 22:59:04.252230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.523 [2024-04-15 22:59:04.252297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.523 [2024-04-15 22:59:04.252318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.523 [2024-04-15 22:59:04.252326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.523 [2024-04-15 22:59:04.252333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.252348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 00:32:19.523 [2024-04-15 22:59:04.262437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.523 [2024-04-15 22:59:04.262562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.523 [2024-04-15 22:59:04.262578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.523 [2024-04-15 22:59:04.262587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.523 [2024-04-15 22:59:04.262593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.262607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 
00:32:19.523 [2024-04-15 22:59:04.272410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.523 [2024-04-15 22:59:04.272484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.523 [2024-04-15 22:59:04.272500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.523 [2024-04-15 22:59:04.272507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.523 [2024-04-15 22:59:04.272514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.272528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 00:32:19.523 [2024-04-15 22:59:04.282386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.523 [2024-04-15 22:59:04.282454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.523 [2024-04-15 22:59:04.282470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.523 [2024-04-15 22:59:04.282478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.523 [2024-04-15 22:59:04.282484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.282499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 00:32:19.523 [2024-04-15 22:59:04.292455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.523 [2024-04-15 22:59:04.292516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.523 [2024-04-15 22:59:04.292532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.523 [2024-04-15 22:59:04.292539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.523 [2024-04-15 22:59:04.292551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.292565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 
00:32:19.523 [2024-04-15 22:59:04.302370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.523 [2024-04-15 22:59:04.302439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.523 [2024-04-15 22:59:04.302456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.523 [2024-04-15 22:59:04.302463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.523 [2024-04-15 22:59:04.302470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.302485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 00:32:19.523 [2024-04-15 22:59:04.312509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.523 [2024-04-15 22:59:04.312580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.523 [2024-04-15 22:59:04.312599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.523 [2024-04-15 22:59:04.312607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.523 [2024-04-15 22:59:04.312614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.312630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 00:32:19.523 [2024-04-15 22:59:04.322567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.523 [2024-04-15 22:59:04.322637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.523 [2024-04-15 22:59:04.322654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.523 [2024-04-15 22:59:04.322661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.523 [2024-04-15 22:59:04.322668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.523 [2024-04-15 22:59:04.322683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.523 qpair failed and we were unable to recover it. 
00:32:19.786 [2024-04-15 22:59:04.332443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.332570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.332586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.332594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.332600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.332614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 00:32:19.786 [2024-04-15 22:59:04.342466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.342528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.342552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.342561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.342567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.342581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 00:32:19.786 [2024-04-15 22:59:04.352623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.352728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.352744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.352751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.352758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.352772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 
00:32:19.786 [2024-04-15 22:59:04.362664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.362733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.362748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.362755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.362762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.362777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 00:32:19.786 [2024-04-15 22:59:04.372549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.372616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.372632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.372639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.372645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.372660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 00:32:19.786 [2024-04-15 22:59:04.382583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.382651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.382666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.382673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.382680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.382694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 
00:32:19.786 [2024-04-15 22:59:04.392732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.392791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.392806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.392814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.392820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.392833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 00:32:19.786 [2024-04-15 22:59:04.402771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.402835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.402850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.402858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.402864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.402877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 00:32:19.786 [2024-04-15 22:59:04.412800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.412861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.412875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.412883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.412889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.412903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 
00:32:19.786 [2024-04-15 22:59:04.422816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.422876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.422892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.422899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.422906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.422920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 00:32:19.786 [2024-04-15 22:59:04.432843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.432901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.786 [2024-04-15 22:59:04.432923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.786 [2024-04-15 22:59:04.432931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.786 [2024-04-15 22:59:04.432938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.786 [2024-04-15 22:59:04.432952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.786 qpair failed and we were unable to recover it. 00:32:19.786 [2024-04-15 22:59:04.442859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.786 [2024-04-15 22:59:04.442915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.442929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.442937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.442944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.442956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 
00:32:19.787 [2024-04-15 22:59:04.452852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.452910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.452924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.452932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.452938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.452953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.787 [2024-04-15 22:59:04.462789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.462851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.462866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.462874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.462880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.462894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.787 [2024-04-15 22:59:04.472826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.472901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.472918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.472926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.472933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.472951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 
00:32:19.787 [2024-04-15 22:59:04.482971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.483035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.483050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.483057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.483064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.483077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.787 [2024-04-15 22:59:04.492957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.493019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.493034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.493041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.493048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.493062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.787 [2024-04-15 22:59:04.503112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.503176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.503191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.503199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.503205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.503219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 
00:32:19.787 [2024-04-15 22:59:04.512928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.512989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.513004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.513011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.513018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.513031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.787 [2024-04-15 22:59:04.523076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.523141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.523160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.523167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.523174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.523187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.787 [2024-04-15 22:59:04.533091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.533145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.533160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.533167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.533174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.533187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 
00:32:19.787 [2024-04-15 22:59:04.543165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.543231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.543246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.543254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.543260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.543274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.787 [2024-04-15 22:59:04.553156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.553218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.553233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.553241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.553247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.553260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.787 [2024-04-15 22:59:04.563186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.563257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.563273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.563280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.563286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.563304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 
00:32:19.787 [2024-04-15 22:59:04.573196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.787 [2024-04-15 22:59:04.573256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.787 [2024-04-15 22:59:04.573271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.787 [2024-04-15 22:59:04.573278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.787 [2024-04-15 22:59:04.573285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc0f8b0 00:32:19.787 [2024-04-15 22:59:04.573298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.787 qpair failed and we were unable to recover it. 00:32:19.788 [2024-04-15 22:59:04.573696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0d600 is same with the state(5) to be set 00:32:19.788 [2024-04-15 22:59:04.583213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.788 [2024-04-15 22:59:04.583270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.788 [2024-04-15 22:59:04.583289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.788 [2024-04-15 22:59:04.583296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.788 [2024-04-15 22:59:04.583301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa648000b90 00:32:19.788 [2024-04-15 22:59:04.583314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.788 qpair failed and we were unable to recover it. 00:32:20.049 [2024-04-15 22:59:04.593255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.049 [2024-04-15 22:59:04.593308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.049 [2024-04-15 22:59:04.593322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.049 [2024-04-15 22:59:04.593328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.049 [2024-04-15 22:59:04.593333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa648000b90 00:32:20.049 [2024-04-15 22:59:04.593344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:20.049 qpair failed and we were unable to recover it. 
00:32:20.049 [2024-04-15 22:59:04.603298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.049 [2024-04-15 22:59:04.603371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.049 [2024-04-15 22:59:04.603396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.049 [2024-04-15 22:59:04.603405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.049 [2024-04-15 22:59:04.603412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa650000b90 00:32:20.049 [2024-04-15 22:59:04.603432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:20.049 qpair failed and we were unable to recover it. 00:32:20.049 [2024-04-15 22:59:04.613325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.049 [2024-04-15 22:59:04.613394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.049 [2024-04-15 22:59:04.613418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.049 [2024-04-15 22:59:04.613427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.049 [2024-04-15 22:59:04.613434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa650000b90 00:32:20.049 [2024-04-15 22:59:04.613454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:20.049 qpair failed and we were unable to recover it. 00:32:20.049 [2024-04-15 22:59:04.623389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.049 [2024-04-15 22:59:04.623536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.049 [2024-04-15 22:59:04.623610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.049 [2024-04-15 22:59:04.623635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.049 [2024-04-15 22:59:04.623655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa640000b90 00:32:20.049 [2024-04-15 22:59:04.623708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:20.049 qpair failed and we were unable to recover it. 
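In these records, "sct 1, sc 130" is the status the host sees on the Fabrics CONNECT completion: status code type 1 is command specific, and for CONNECT a status code of 0x82 (decimal 130) is "Connect Invalid Parameters" among the NVMe-oF Fabrics status values, which lines up with the target-side "Unknown controller ID 0x1" error while target_disconnect.sh is repeatedly tearing controllers down. A small illustrative decoder, my own sketch rather than anything shipped with SPDK:

  decode_connect_sc() {
    # Decode the command-specific (sct 1) status code of a Fabrics CONNECT response.
    case $(printf '0x%02x' "$1") in
      0x80) echo 'Connect Incompatible Format' ;;
      0x81) echo 'Connect Controller Busy' ;;
      0x82) echo 'Connect Invalid Parameters' ;;
      0x83) echo 'Connect Restart Discovery' ;;
      0x84) echo 'Connect Invalid Host' ;;
      *)    echo 'unknown/other status code' ;;
    esac
  }
  decode_connect_sc 130   # prints: Connect Invalid Parameters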
00:32:20.049 [2024-04-15 22:59:04.633422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.049 [2024-04-15 22:59:04.633529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.049 [2024-04-15 22:59:04.633573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.049 [2024-04-15 22:59:04.633590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.049 [2024-04-15 22:59:04.633606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa640000b90 00:32:20.049 [2024-04-15 22:59:04.633640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:20.049 qpair failed and we were unable to recover it. 00:32:20.049 [2024-04-15 22:59:04.634113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0d600 (9): Bad file descriptor 00:32:20.049 Initializing NVMe Controllers 00:32:20.049 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:20.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:20.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:20.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:20.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:20.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:20.050 Initialization complete. Launching workers. 
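The initiator above attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and associates the connection with lcores 0 through 3. For comparison only, and not something this test script itself runs, the same subsystem could be reached from a Linux host with nvme-cli, assuming the kernel TCP initiator module is available and the target is still listening:

  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys                                  # confirm the new controller appeared
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # detach when done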
00:32:20.050 Starting thread on core 1 00:32:20.050 Starting thread on core 2 00:32:20.050 Starting thread on core 3 00:32:20.050 Starting thread on core 0 00:32:20.050 22:59:04 -- host/target_disconnect.sh@59 -- # sync 00:32:20.050 00:32:20.050 real 0m11.290s 00:32:20.050 user 0m21.198s 00:32:20.050 sys 0m3.664s 00:32:20.050 22:59:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:20.050 22:59:04 -- common/autotest_common.sh@10 -- # set +x 00:32:20.050 ************************************ 00:32:20.050 END TEST nvmf_target_disconnect_tc2 00:32:20.050 ************************************ 00:32:20.050 22:59:04 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:32:20.050 22:59:04 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:20.050 22:59:04 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:32:20.050 22:59:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:20.050 22:59:04 -- nvmf/common.sh@116 -- # sync 00:32:20.050 22:59:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:20.050 22:59:04 -- nvmf/common.sh@119 -- # set +e 00:32:20.050 22:59:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:20.050 22:59:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:20.050 rmmod nvme_tcp 00:32:20.050 rmmod nvme_fabrics 00:32:20.050 rmmod nvme_keyring 00:32:20.050 22:59:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:20.050 22:59:04 -- nvmf/common.sh@123 -- # set -e 00:32:20.050 22:59:04 -- nvmf/common.sh@124 -- # return 0 00:32:20.050 22:59:04 -- nvmf/common.sh@477 -- # '[' -n 1338626 ']' 00:32:20.050 22:59:04 -- nvmf/common.sh@478 -- # killprocess 1338626 00:32:20.050 22:59:04 -- common/autotest_common.sh@926 -- # '[' -z 1338626 ']' 00:32:20.050 22:59:04 -- common/autotest_common.sh@930 -- # kill -0 1338626 00:32:20.050 22:59:04 -- common/autotest_common.sh@931 -- # uname 00:32:20.050 22:59:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:20.050 22:59:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1338626 00:32:20.050 22:59:04 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:32:20.050 22:59:04 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:32:20.050 22:59:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1338626' 00:32:20.050 killing process with pid 1338626 00:32:20.050 22:59:04 -- common/autotest_common.sh@945 -- # kill 1338626 00:32:20.050 22:59:04 -- common/autotest_common.sh@950 -- # wait 1338626 00:32:20.311 22:59:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:20.311 22:59:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:20.311 22:59:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:20.311 22:59:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:20.311 22:59:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:20.311 22:59:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.311 22:59:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:20.311 22:59:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.223 22:59:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:22.223 00:32:22.223 real 0m21.916s 00:32:22.223 user 0m48.822s 00:32:22.223 sys 0m9.894s 00:32:22.223 22:59:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.223 22:59:07 -- common/autotest_common.sh@10 -- # set +x 00:32:22.223 ************************************ 00:32:22.223 END TEST nvmf_target_disconnect 00:32:22.223 
************************************ 00:32:22.484 22:59:07 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:32:22.484 22:59:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:22.484 22:59:07 -- common/autotest_common.sh@10 -- # set +x 00:32:22.484 22:59:07 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:32:22.484 00:32:22.484 real 24m56.021s 00:32:22.484 user 64m16.950s 00:32:22.484 sys 7m0.923s 00:32:22.484 22:59:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.484 22:59:07 -- common/autotest_common.sh@10 -- # set +x 00:32:22.484 ************************************ 00:32:22.484 END TEST nvmf_tcp 00:32:22.484 ************************************ 00:32:22.484 22:59:07 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:32:22.484 22:59:07 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.484 22:59:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:22.484 22:59:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:22.484 22:59:07 -- common/autotest_common.sh@10 -- # set +x 00:32:22.484 ************************************ 00:32:22.484 START TEST spdkcli_nvmf_tcp 00:32:22.484 ************************************ 00:32:22.484 22:59:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.484 * Looking for test storage... 00:32:22.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:22.484 22:59:07 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:22.484 22:59:07 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:22.484 22:59:07 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:22.484 22:59:07 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.484 22:59:07 -- nvmf/common.sh@7 -- # uname -s 00:32:22.484 22:59:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.484 22:59:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.484 22:59:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.484 22:59:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.484 22:59:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.484 22:59:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.484 22:59:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.484 22:59:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.484 22:59:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.484 22:59:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.484 22:59:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:22.484 22:59:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:22.484 22:59:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.484 22:59:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.484 22:59:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.484 22:59:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.484 22:59:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:32:22.484 22:59:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.484 22:59:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.485 22:59:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.485 22:59:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.485 22:59:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.485 22:59:07 -- paths/export.sh@5 -- # export PATH 00:32:22.485 22:59:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.485 22:59:07 -- nvmf/common.sh@46 -- # : 0 00:32:22.485 22:59:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:22.485 22:59:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:22.485 22:59:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:22.485 22:59:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.485 22:59:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.485 22:59:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:22.485 22:59:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:22.485 22:59:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:22.485 22:59:07 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:22.485 22:59:07 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:22.485 22:59:07 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:22.485 22:59:07 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:22.485 22:59:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:22.485 22:59:07 -- common/autotest_common.sh@10 -- # set +x 00:32:22.485 22:59:07 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:22.485 22:59:07 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1340459 00:32:22.485 22:59:07 -- spdkcli/common.sh@34 -- # waitforlisten 1340459 00:32:22.485 22:59:07 -- common/autotest_common.sh@819 -- # '[' -z 1340459 ']' 00:32:22.485 22:59:07 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:22.485 22:59:07 -- common/autotest_common.sh@823 
-- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.485 22:59:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:22.485 22:59:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.485 22:59:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:22.485 22:59:07 -- common/autotest_common.sh@10 -- # set +x 00:32:22.745 [2024-04-15 22:59:07.314089] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:32:22.745 [2024-04-15 22:59:07.314165] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340459 ] 00:32:22.745 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.745 [2024-04-15 22:59:07.383517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:22.745 [2024-04-15 22:59:07.446434] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:22.745 [2024-04-15 22:59:07.446602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.745 [2024-04-15 22:59:07.446628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.317 22:59:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:23.317 22:59:08 -- common/autotest_common.sh@852 -- # return 0 00:32:23.317 22:59:08 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:23.317 22:59:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:23.317 22:59:08 -- common/autotest_common.sh@10 -- # set +x 00:32:23.317 22:59:08 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:23.317 22:59:08 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:23.317 22:59:08 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:23.317 22:59:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:23.317 22:59:08 -- common/autotest_common.sh@10 -- # set +x 00:32:23.317 22:59:08 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:23.317 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:23.317 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:23.317 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:23.317 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:23.317 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:23.317 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:23.317 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.317 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.317 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:23.317 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:23.317 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:23.317 ' 00:32:23.889 [2024-04-15 22:59:08.439061] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:26.467 [2024-04-15 22:59:10.687930] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.409 [2024-04-15 22:59:11.988154] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:29.952 [2024-04-15 22:59:14.395406] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:31.868 [2024-04-15 22:59:16.477733] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:33.264 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:33.264 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:33.264 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:33.264 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:33.264 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:33.264 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:33.264 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:33.264 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:33.264 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.265 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.265 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:33.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:33.265 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:33.525 22:59:18 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:33.525 22:59:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:33.525 22:59:18 -- common/autotest_common.sh@10 -- # set +x 00:32:33.525 22:59:18 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:33.525 22:59:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:33.525 22:59:18 -- common/autotest_common.sh@10 -- # set +x 00:32:33.525 22:59:18 -- spdkcli/nvmf.sh@69 -- # check_match 00:32:33.525 22:59:18 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:33.787 22:59:18 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:33.787 22:59:18 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:33.787 22:59:18 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:33.787 22:59:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:33.787 22:59:18 -- common/autotest_common.sh@10 -- # set +x 00:32:34.047 22:59:18 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:34.047 22:59:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:34.047 22:59:18 -- common/autotest_common.sh@10 -- # set +x 00:32:34.047 22:59:18 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:34.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:34.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:34.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:34.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:34.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:34.047 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:34.047 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:34.047 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:34.047 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:34.047 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:34.047 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:34.047 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:34.048 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:34.048 ' 00:32:39.337 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:39.337 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:39.337 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:39.337 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:39.337 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:39.337 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:39.337 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:39.337 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:39.337 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:39.337 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:39.337 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:39.337 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:39.337 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:39.337 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:39.337 22:59:23 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:39.337 22:59:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:39.337 22:59:23 -- common/autotest_common.sh@10 -- # set +x 00:32:39.337 22:59:23 -- spdkcli/nvmf.sh@90 -- # killprocess 1340459 00:32:39.337 22:59:23 -- common/autotest_common.sh@926 -- # '[' -z 1340459 ']' 00:32:39.337 22:59:23 -- common/autotest_common.sh@930 -- # kill -0 1340459 00:32:39.337 22:59:23 -- common/autotest_common.sh@931 -- # uname 00:32:39.337 22:59:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:39.337 22:59:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1340459 00:32:39.337 22:59:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:39.337 22:59:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:39.337 22:59:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1340459' 00:32:39.337 killing process with pid 1340459 00:32:39.337 22:59:23 -- common/autotest_common.sh@945 -- # kill 1340459 00:32:39.337 [2024-04-15 22:59:23.588706] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:39.337 22:59:23 -- common/autotest_common.sh@950 -- # wait 1340459 00:32:39.337 22:59:23 -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:39.337 22:59:23 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:39.337 22:59:23 -- spdkcli/common.sh@13 -- # '[' -n 1340459 ']' 00:32:39.337 22:59:23 -- spdkcli/common.sh@14 -- # killprocess 1340459 00:32:39.337 22:59:23 -- common/autotest_common.sh@926 -- # '[' -z 1340459 ']' 00:32:39.337 22:59:23 -- common/autotest_common.sh@930 -- # kill -0 1340459 00:32:39.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1340459) - No such process 00:32:39.337 22:59:23 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1340459 is not found' 00:32:39.337 Process with pid 1340459 is not found 00:32:39.337 22:59:23 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:39.337 22:59:23 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:39.337 22:59:23 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:39.337 00:32:39.337 real 0m16.590s 00:32:39.337 user 0m35.518s 00:32:39.337 sys 0m0.823s 00:32:39.337 22:59:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.337 22:59:23 -- common/autotest_common.sh@10 -- # set +x 00:32:39.337 ************************************ 00:32:39.337 END TEST spdkcli_nvmf_tcp 00:32:39.337 ************************************ 00:32:39.337 22:59:23 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:39.337 22:59:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:39.337 22:59:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:39.337 22:59:23 -- common/autotest_common.sh@10 -- # set +x 00:32:39.337 ************************************ 00:32:39.337 START TEST 
nvmf_identify_passthru 00:32:39.337 ************************************ 00:32:39.337 22:59:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:39.337 * Looking for test storage... 00:32:39.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.337 22:59:23 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.337 22:59:23 -- nvmf/common.sh@7 -- # uname -s 00:32:39.337 22:59:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.337 22:59:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.337 22:59:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.337 22:59:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.337 22:59:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.337 22:59:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.337 22:59:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.337 22:59:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.337 22:59:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.337 22:59:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.337 22:59:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:39.337 22:59:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:39.337 22:59:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.337 22:59:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.337 22:59:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.337 22:59:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.337 22:59:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.337 22:59:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.337 22:59:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.337 22:59:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.337 22:59:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.337 22:59:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.337 22:59:23 -- paths/export.sh@5 -- # export PATH 00:32:39.337 
22:59:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.337 22:59:23 -- nvmf/common.sh@46 -- # : 0 00:32:39.337 22:59:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:39.337 22:59:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:39.338 22:59:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:39.338 22:59:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.338 22:59:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.338 22:59:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:39.338 22:59:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:39.338 22:59:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:39.338 22:59:23 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.338 22:59:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.338 22:59:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.338 22:59:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.338 22:59:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.338 22:59:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.338 22:59:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.338 22:59:23 -- paths/export.sh@5 -- # export PATH 00:32:39.338 22:59:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.338 22:59:23 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:39.338 22:59:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:39.338 22:59:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.338 22:59:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:39.338 22:59:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:39.338 22:59:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:39.338 22:59:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.338 22:59:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.338 22:59:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.338 22:59:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:39.338 22:59:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:39.338 22:59:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:39.338 22:59:23 -- common/autotest_common.sh@10 -- # set +x 00:32:47.487 22:59:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:47.487 22:59:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:47.487 22:59:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:47.487 22:59:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:47.487 22:59:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:47.487 22:59:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:47.487 22:59:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:47.487 22:59:31 -- nvmf/common.sh@294 -- # net_devs=() 00:32:47.487 22:59:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:47.487 22:59:31 -- nvmf/common.sh@295 -- # e810=() 00:32:47.487 22:59:31 -- nvmf/common.sh@295 -- # local -ga e810 00:32:47.487 22:59:31 -- nvmf/common.sh@296 -- # x722=() 00:32:47.487 22:59:31 -- nvmf/common.sh@296 -- # local -ga x722 00:32:47.487 22:59:31 -- nvmf/common.sh@297 -- # mlx=() 00:32:47.487 22:59:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:47.487 22:59:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.487 22:59:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.488 22:59:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.488 22:59:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:47.488 22:59:31 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:47.488 22:59:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:47.488 22:59:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:47.488 22:59:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:47.488 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:47.488 22:59:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:47.488 22:59:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:47.488 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:47.488 22:59:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:47.488 22:59:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:47.488 22:59:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.488 22:59:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:47.488 22:59:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.488 22:59:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:47.488 Found net devices under 0000:31:00.0: cvl_0_0 00:32:47.488 22:59:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.488 22:59:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:47.488 22:59:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.488 22:59:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:47.488 22:59:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.488 22:59:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:47.488 Found net devices under 0000:31:00.1: cvl_0_1 00:32:47.488 22:59:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.488 22:59:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:47.488 22:59:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:47.488 22:59:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:47.488 22:59:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:47.488 22:59:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.488 22:59:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.488 22:59:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.488 22:59:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:47.488 22:59:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.488 22:59:31 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.488 22:59:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:47.488 22:59:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.488 22:59:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.488 22:59:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:47.488 22:59:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:47.488 22:59:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.488 22:59:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.488 22:59:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.488 22:59:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.488 22:59:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:47.488 22:59:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.488 22:59:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.488 22:59:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.488 22:59:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:47.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:32:47.488 00:32:47.488 --- 10.0.0.2 ping statistics --- 00:32:47.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.488 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:32:47.488 22:59:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:47.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:32:47.488 00:32:47.488 --- 10.0.0.1 ping statistics --- 00:32:47.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.488 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:32:47.488 22:59:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.488 22:59:32 -- nvmf/common.sh@410 -- # return 0 00:32:47.488 22:59:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:47.488 22:59:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.488 22:59:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:47.488 22:59:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:47.488 22:59:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.488 22:59:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:47.488 22:59:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:47.488 22:59:32 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:47.488 22:59:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:47.488 22:59:32 -- common/autotest_common.sh@10 -- # set +x 00:32:47.488 22:59:32 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:47.488 22:59:32 -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:47.488 22:59:32 -- common/autotest_common.sh@1509 -- # local bdfs 00:32:47.488 22:59:32 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:47.488 22:59:32 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:47.488 22:59:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:47.488 22:59:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:47.488 22:59:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:32:47.488 22:59:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:47.488 22:59:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:47.488 22:59:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:47.488 22:59:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:32:47.488 22:59:32 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:32:47.488 22:59:32 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:32:47.488 22:59:32 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:32:47.488 22:59:32 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:47.488 22:59:32 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:47.488 22:59:32 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:47.749 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.011 22:59:32 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:32:48.011 22:59:32 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:48.011 22:59:32 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:48.011 22:59:32 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:48.011 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.584 22:59:33 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:32:48.584 22:59:33 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:48.584 22:59:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:48.584 22:59:33 -- common/autotest_common.sh@10 -- # set +x 00:32:48.584 22:59:33 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:48.584 22:59:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:48.584 22:59:33 -- common/autotest_common.sh@10 -- # set +x 00:32:48.584 22:59:33 -- target/identify_passthru.sh@31 -- # nvmfpid=1348084 00:32:48.584 22:59:33 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:48.584 22:59:33 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:48.584 22:59:33 -- target/identify_passthru.sh@35 -- # waitforlisten 1348084 00:32:48.584 22:59:33 -- common/autotest_common.sh@819 -- # '[' -z 1348084 ']' 00:32:48.584 22:59:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.584 22:59:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:48.584 22:59:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.584 22:59:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:48.584 22:59:33 -- common/autotest_common.sh@10 -- # set +x 00:32:48.584 [2024-04-15 22:59:33.286899] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
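A note on the identify step traced just above: before the passthru target is brought up, identify_passthru.sh reads the serial and model number of the local PCIe controller so it can later verify that the NVMe/TCP subsystem reports the same values. A minimal hand-run sketch of that capture, reusing the BDF found on this host (0000:65:00.0 is specific to this machine):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdf=0000:65:00.0   # first controller reported by scripts/gen_nvme.sh on this host
  # Query the controller directly over PCIe and pull out the two fields compared later.
  nvme_serial_number=$("$spdk/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  nvme_model_number=$("$spdk/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "$nvme_serial_number $nvme_model_number"   # S64GNE0R605494 SAMSUNG in this run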
00:32:48.584 [2024-04-15 22:59:33.286955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.584 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.584 [2024-04-15 22:59:33.368903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:48.846 [2024-04-15 22:59:33.438278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:48.846 [2024-04-15 22:59:33.438415] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.846 [2024-04-15 22:59:33.438424] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.846 [2024-04-15 22:59:33.438433] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:48.846 [2024-04-15 22:59:33.438551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.846 [2024-04-15 22:59:33.438687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:48.846 [2024-04-15 22:59:33.438898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:48.846 [2024-04-15 22:59:33.438900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.419 22:59:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:49.419 22:59:34 -- common/autotest_common.sh@852 -- # return 0 00:32:49.419 22:59:34 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:49.419 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.419 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.419 INFO: Log level set to 20 00:32:49.419 INFO: Requests: 00:32:49.419 { 00:32:49.419 "jsonrpc": "2.0", 00:32:49.419 "method": "nvmf_set_config", 00:32:49.419 "id": 1, 00:32:49.419 "params": { 00:32:49.419 "admin_cmd_passthru": { 00:32:49.419 "identify_ctrlr": true 00:32:49.419 } 00:32:49.419 } 00:32:49.419 } 00:32:49.419 00:32:49.419 INFO: response: 00:32:49.419 { 00:32:49.419 "jsonrpc": "2.0", 00:32:49.419 "id": 1, 00:32:49.419 "result": true 00:32:49.419 } 00:32:49.419 00:32:49.419 22:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.419 22:59:34 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:49.419 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.419 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.419 INFO: Setting log level to 20 00:32:49.419 INFO: Setting log level to 20 00:32:49.419 INFO: Log level set to 20 00:32:49.419 INFO: Log level set to 20 00:32:49.419 INFO: Requests: 00:32:49.419 { 00:32:49.419 "jsonrpc": "2.0", 00:32:49.419 "method": "framework_start_init", 00:32:49.419 "id": 1 00:32:49.419 } 00:32:49.419 00:32:49.419 INFO: Requests: 00:32:49.419 { 00:32:49.419 "jsonrpc": "2.0", 00:32:49.419 "method": "framework_start_init", 00:32:49.419 "id": 1 00:32:49.419 } 00:32:49.419 00:32:49.419 [2024-04-15 22:59:34.149002] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:49.419 INFO: response: 00:32:49.419 { 00:32:49.419 "jsonrpc": "2.0", 00:32:49.419 "id": 1, 00:32:49.419 "result": true 00:32:49.419 } 00:32:49.419 00:32:49.419 INFO: response: 00:32:49.419 { 00:32:49.419 "jsonrpc": "2.0", 00:32:49.419 "id": 1, 00:32:49.419 "result": true 00:32:49.419 } 00:32:49.419 00:32:49.419 22:59:34 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.419 22:59:34 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:49.419 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.419 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.419 INFO: Setting log level to 40 00:32:49.419 INFO: Setting log level to 40 00:32:49.419 INFO: Setting log level to 40 00:32:49.419 [2024-04-15 22:59:34.162263] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.419 22:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.419 22:59:34 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:49.419 22:59:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:49.419 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.419 22:59:34 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:32:49.419 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.419 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.991 Nvme0n1 00:32:49.991 22:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.991 22:59:34 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:49.991 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.991 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.991 22:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.991 22:59:34 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:49.991 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.991 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.991 22:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.991 22:59:34 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.991 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.991 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.991 [2024-04-15 22:59:34.548785] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.991 22:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.991 22:59:34 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:49.991 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.991 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:49.991 [2024-04-15 22:59:34.560617] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:49.991 [ 00:32:49.991 { 00:32:49.991 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:49.991 "subtype": "Discovery", 00:32:49.991 "listen_addresses": [], 00:32:49.991 "allow_any_host": true, 00:32:49.991 "hosts": [] 00:32:49.991 }, 00:32:49.991 { 00:32:49.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:49.991 "subtype": "NVMe", 00:32:49.991 "listen_addresses": [ 00:32:49.991 { 00:32:49.991 "transport": "TCP", 00:32:49.991 "trtype": "TCP", 00:32:49.991 "adrfam": "IPv4", 00:32:49.991 "traddr": "10.0.0.2", 00:32:49.991 "trsvcid": "4420" 00:32:49.991 } 00:32:49.991 ], 00:32:49.991 "allow_any_host": true, 00:32:49.991 "hosts": [], 00:32:49.991 "serial_number": "SPDK00000000000001", 
00:32:49.991 "model_number": "SPDK bdev Controller", 00:32:49.991 "max_namespaces": 1, 00:32:49.991 "min_cntlid": 1, 00:32:49.991 "max_cntlid": 65519, 00:32:49.991 "namespaces": [ 00:32:49.991 { 00:32:49.991 "nsid": 1, 00:32:49.991 "bdev_name": "Nvme0n1", 00:32:49.991 "name": "Nvme0n1", 00:32:49.991 "nguid": "3634473052605494002538450000001F", 00:32:49.991 "uuid": "36344730-5260-5494-0025-38450000001f" 00:32:49.991 } 00:32:49.991 ] 00:32:49.991 } 00:32:49.991 ] 00:32:49.991 22:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.991 22:59:34 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:49.991 22:59:34 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:49.991 22:59:34 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:49.991 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.253 22:59:34 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:32:50.253 22:59:34 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:50.253 22:59:34 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:50.253 22:59:34 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:50.253 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.253 22:59:34 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:32:50.253 22:59:34 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:32:50.253 22:59:34 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:32:50.253 22:59:34 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.253 22:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.253 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:32:50.253 22:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.253 22:59:34 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:50.253 22:59:34 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:50.253 22:59:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:50.253 22:59:34 -- nvmf/common.sh@116 -- # sync 00:32:50.253 22:59:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:50.253 22:59:34 -- nvmf/common.sh@119 -- # set +e 00:32:50.253 22:59:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:50.253 22:59:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:50.253 rmmod nvme_tcp 00:32:50.253 rmmod nvme_fabrics 00:32:50.253 rmmod nvme_keyring 00:32:50.253 22:59:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:50.253 22:59:35 -- nvmf/common.sh@123 -- # set -e 00:32:50.253 22:59:35 -- nvmf/common.sh@124 -- # return 0 00:32:50.253 22:59:35 -- nvmf/common.sh@477 -- # '[' -n 1348084 ']' 00:32:50.253 22:59:35 -- nvmf/common.sh@478 -- # killprocess 1348084 00:32:50.253 22:59:35 -- common/autotest_common.sh@926 -- # '[' -z 1348084 ']' 00:32:50.253 22:59:35 -- common/autotest_common.sh@930 -- # kill -0 1348084 00:32:50.253 22:59:35 -- common/autotest_common.sh@931 -- # uname 00:32:50.253 22:59:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:50.253 22:59:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1348084 00:32:50.514 22:59:35 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:50.514 22:59:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:50.514 22:59:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1348084' 00:32:50.514 killing process with pid 1348084 00:32:50.514 22:59:35 -- common/autotest_common.sh@945 -- # kill 1348084 00:32:50.514 [2024-04-15 22:59:35.109395] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:50.514 22:59:35 -- common/autotest_common.sh@950 -- # wait 1348084 00:32:50.776 22:59:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:50.776 22:59:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:50.776 22:59:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:50.776 22:59:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:50.776 22:59:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:50.776 22:59:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.776 22:59:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:50.776 22:59:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.691 22:59:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:52.691 00:32:52.691 real 0m13.675s 00:32:52.691 user 0m10.350s 00:32:52.691 sys 0m6.782s 00:32:52.691 22:59:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:52.691 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:32:52.691 ************************************ 00:32:52.691 END TEST nvmf_identify_passthru 00:32:52.691 ************************************ 00:32:52.691 22:59:37 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:52.691 22:59:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:52.691 22:59:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:52.691 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:32:52.691 ************************************ 00:32:52.691 START TEST nvmf_dif 00:32:52.691 ************************************ 00:32:52.691 22:59:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:52.952 * Looking for test storage... 
00:32:52.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:52.952 22:59:37 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.952 22:59:37 -- nvmf/common.sh@7 -- # uname -s 00:32:52.952 22:59:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.952 22:59:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.952 22:59:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.952 22:59:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.953 22:59:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.953 22:59:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.953 22:59:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.953 22:59:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.953 22:59:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.953 22:59:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.953 22:59:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:52.953 22:59:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:52.953 22:59:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.953 22:59:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.953 22:59:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.953 22:59:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.953 22:59:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.953 22:59:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.953 22:59:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.953 22:59:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.953 22:59:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.953 22:59:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.953 22:59:37 -- paths/export.sh@5 -- # export PATH 00:32:52.953 22:59:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.953 22:59:37 -- nvmf/common.sh@46 -- # : 0 00:32:52.953 22:59:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:52.953 22:59:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:52.953 22:59:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:52.953 22:59:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.953 22:59:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.953 22:59:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:52.953 22:59:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:52.953 22:59:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:52.953 22:59:37 -- target/dif.sh@15 -- # NULL_META=16 00:32:52.953 22:59:37 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:52.953 22:59:37 -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:52.953 22:59:37 -- target/dif.sh@15 -- # NULL_DIF=1 00:32:52.953 22:59:37 -- target/dif.sh@135 -- # nvmftestinit 00:32:52.953 22:59:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:52.953 22:59:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.953 22:59:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:52.953 22:59:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:52.953 22:59:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:52.953 22:59:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.953 22:59:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:52.953 22:59:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.953 22:59:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:52.953 22:59:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:52.953 22:59:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:52.953 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:33:01.097 22:59:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:01.097 22:59:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:01.097 22:59:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:01.097 22:59:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:01.097 22:59:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:01.097 22:59:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:01.097 22:59:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:01.097 22:59:45 -- nvmf/common.sh@294 -- # net_devs=() 00:33:01.097 22:59:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:01.097 22:59:45 -- nvmf/common.sh@295 -- # e810=() 00:33:01.097 22:59:45 -- nvmf/common.sh@295 -- # local -ga e810 00:33:01.097 22:59:45 -- nvmf/common.sh@296 -- # x722=() 00:33:01.097 22:59:45 -- nvmf/common.sh@296 -- # local -ga x722 00:33:01.097 22:59:45 -- nvmf/common.sh@297 -- # mlx=() 00:33:01.097 22:59:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:01.097 22:59:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:33:01.097 22:59:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.097 22:59:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:01.097 22:59:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:01.097 22:59:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:01.097 22:59:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:01.097 22:59:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:01.097 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:01.097 22:59:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:01.097 22:59:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:01.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:01.097 22:59:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:01.097 22:59:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:01.097 22:59:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.097 22:59:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:01.097 22:59:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.097 22:59:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:01.097 Found net devices under 0000:31:00.0: cvl_0_0 00:33:01.097 22:59:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.097 22:59:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:01.097 22:59:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.097 22:59:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:01.097 22:59:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.097 22:59:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:01.097 Found net devices under 0000:31:00.1: cvl_0_1 00:33:01.097 22:59:45 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:01.097 22:59:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:01.097 22:59:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:01.097 22:59:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:01.097 22:59:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:01.097 22:59:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:01.097 22:59:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.097 22:59:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.097 22:59:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:01.097 22:59:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.097 22:59:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.097 22:59:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:01.097 22:59:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.097 22:59:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:01.097 22:59:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:01.097 22:59:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:01.097 22:59:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.097 22:59:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.097 22:59:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.097 22:59:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.097 22:59:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:01.097 22:59:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.097 22:59:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.097 22:59:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.097 22:59:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:01.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:01.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:33:01.097 00:33:01.097 --- 10.0.0.2 ping statistics --- 00:33:01.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.097 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:33:01.097 22:59:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:01.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:33:01.097 00:33:01.097 --- 10.0.0.1 ping statistics --- 00:33:01.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.097 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:33:01.097 22:59:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.097 22:59:45 -- nvmf/common.sh@410 -- # return 0 00:33:01.097 22:59:45 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:01.097 22:59:45 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:05.318 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:33:05.318 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:05.318 22:59:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.318 22:59:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:05.318 22:59:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:05.318 22:59:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.318 22:59:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:05.318 22:59:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:05.318 22:59:49 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:05.318 22:59:49 -- target/dif.sh@137 -- # nvmfappstart 00:33:05.318 22:59:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:05.318 22:59:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:05.318 22:59:49 -- common/autotest_common.sh@10 -- # set +x 00:33:05.318 22:59:49 -- nvmf/common.sh@469 -- # nvmfpid=1354969 00:33:05.318 22:59:49 -- nvmf/common.sh@470 -- # waitforlisten 1354969 00:33:05.318 22:59:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:05.318 22:59:49 -- common/autotest_common.sh@819 -- # '[' -z 1354969 ']' 00:33:05.318 22:59:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.318 22:59:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:05.318 22:59:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
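The bring-up traced above follows the same nvmf_tcp_init pattern used for the identify_passthru run: one port of the E810 pair is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, both ends are addressed and ping-tested, and the target application is then started inside the namespace. A condensed, hand-run sketch of that sequence, reusing the interface names and addresses from this run (cvl_0_0/cvl_0_1 and 10.0.0.1/10.0.0.2 are specific to this host):

  # Target side lives in its own namespace; the initiator side stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Start the target inside the namespace; the RPC socket is filesystem-based, so it
  # can be polled from the root namespace (a rough stand-in for waitforlisten).
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

Because the cvl_0_1 side keeps its address in the root namespace, fio and nvme-cli on the same machine can reach the target at 10.0.0.2:4420 as if it were a remote host.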
00:33:05.318 22:59:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:05.318 22:59:49 -- common/autotest_common.sh@10 -- # set +x 00:33:05.318 [2024-04-15 22:59:49.806502] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:33:05.318 [2024-04-15 22:59:49.806607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.318 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.318 [2024-04-15 22:59:49.887416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.318 [2024-04-15 22:59:49.959281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:05.318 [2024-04-15 22:59:49.959405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.318 [2024-04-15 22:59:49.959414] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.318 [2024-04-15 22:59:49.959422] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.318 [2024-04-15 22:59:49.959440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.890 22:59:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:05.890 22:59:50 -- common/autotest_common.sh@852 -- # return 0 00:33:05.890 22:59:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:05.890 22:59:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:05.890 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:33:05.890 22:59:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.890 22:59:50 -- target/dif.sh@139 -- # create_transport 00:33:05.890 22:59:50 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:05.890 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.890 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:33:05.890 [2024-04-15 22:59:50.607171] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.890 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.890 22:59:50 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:05.890 22:59:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:05.890 22:59:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:05.890 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:33:05.890 ************************************ 00:33:05.890 START TEST fio_dif_1_default 00:33:05.890 ************************************ 00:33:05.890 22:59:50 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:33:05.890 22:59:50 -- target/dif.sh@86 -- # create_subsystems 0 00:33:05.890 22:59:50 -- target/dif.sh@28 -- # local sub 00:33:05.890 22:59:50 -- target/dif.sh@30 -- # for sub in "$@" 00:33:05.890 22:59:50 -- target/dif.sh@31 -- # create_subsystem 0 00:33:05.890 22:59:50 -- target/dif.sh@18 -- # local sub_id=0 00:33:05.890 22:59:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:05.890 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.890 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:33:05.890 bdev_null0 00:33:05.890 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.890 22:59:50 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:05.890 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.890 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:33:05.890 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.890 22:59:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:05.890 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.890 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:33:05.890 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.890 22:59:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:05.890 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.890 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:33:05.890 [2024-04-15 22:59:50.663441] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.890 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.890 22:59:50 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:05.890 22:59:50 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:05.890 22:59:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:05.890 22:59:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.890 22:59:50 -- nvmf/common.sh@520 -- # config=() 00:33:05.890 22:59:50 -- nvmf/common.sh@520 -- # local subsystem config 00:33:05.890 22:59:50 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.890 22:59:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:05.890 22:59:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:05.890 22:59:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:05.890 { 00:33:05.890 "params": { 00:33:05.890 "name": "Nvme$subsystem", 00:33:05.890 "trtype": "$TEST_TRANSPORT", 00:33:05.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.890 "adrfam": "ipv4", 00:33:05.890 "trsvcid": "$NVMF_PORT", 00:33:05.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.890 "hdgst": ${hdgst:-false}, 00:33:05.890 "ddgst": ${ddgst:-false} 00:33:05.890 }, 00:33:05.890 "method": "bdev_nvme_attach_controller" 00:33:05.890 } 00:33:05.890 EOF 00:33:05.890 )") 00:33:05.890 22:59:50 -- target/dif.sh@82 -- # gen_fio_conf 00:33:05.890 22:59:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.890 22:59:50 -- target/dif.sh@54 -- # local file 00:33:05.890 22:59:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:05.890 22:59:50 -- target/dif.sh@56 -- # cat 00:33:05.890 22:59:50 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.890 22:59:50 -- common/autotest_common.sh@1320 -- # shift 00:33:05.890 22:59:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:05.890 22:59:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.890 22:59:50 -- nvmf/common.sh@542 -- # cat 00:33:05.890 22:59:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.890 22:59:50 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:33:05.890 22:59:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:05.890 22:59:50 -- target/dif.sh@72 -- # (( file <= files )) 00:33:05.890 22:59:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:05.890 22:59:50 -- nvmf/common.sh@544 -- # jq . 00:33:05.890 22:59:50 -- nvmf/common.sh@545 -- # IFS=, 00:33:05.890 22:59:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:05.890 "params": { 00:33:05.890 "name": "Nvme0", 00:33:05.890 "trtype": "tcp", 00:33:05.890 "traddr": "10.0.0.2", 00:33:05.890 "adrfam": "ipv4", 00:33:05.890 "trsvcid": "4420", 00:33:05.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:05.890 "hdgst": false, 00:33:05.890 "ddgst": false 00:33:05.890 }, 00:33:05.890 "method": "bdev_nvme_attach_controller" 00:33:05.890 }' 00:33:06.218 22:59:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:06.218 22:59:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:06.218 22:59:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:06.218 22:59:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:06.218 22:59:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:06.218 22:59:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:06.218 22:59:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:06.218 22:59:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:06.218 22:59:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:06.218 22:59:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:06.497 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:06.497 fio-3.35 00:33:06.497 Starting 1 thread 00:33:06.497 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.758 [2024-04-15 22:59:51.526268] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:06.758 [2024-04-15 22:59:51.526311] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:18.997 00:33:18.997 filename0: (groupid=0, jobs=1): err= 0: pid=1355509: Mon Apr 15 23:00:01 2024 00:33:18.997 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10035msec) 00:33:18.997 slat (nsec): min=5342, max=63967, avg=6189.52, stdev=2461.43 00:33:18.997 clat (usec): min=40979, max=43028, avg=41966.46, stdev=233.26 00:33:18.997 lat (usec): min=40984, max=43065, avg=41972.64, stdev=233.48 00:33:18.997 clat percentiles (usec): 00:33:18.997 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:33:18.997 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:18.997 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:18.997 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:33:18.997 | 99.99th=[43254] 00:33:18.997 bw ( KiB/s): min= 352, max= 384, per=99.72%, avg=380.80, stdev= 9.85, samples=20 00:33:18.997 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:33:18.997 lat (msec) : 50=100.00% 00:33:18.997 cpu : usr=95.89%, sys=3.90%, ctx=14, majf=0, minf=310 00:33:18.997 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:18.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.997 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.997 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:18.997 00:33:18.997 Run status group 0 (all jobs): 00:33:18.997 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10035-10035msec 00:33:18.997 23:00:01 -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:18.997 23:00:01 -- target/dif.sh@43 -- # local sub 00:33:18.997 23:00:01 -- target/dif.sh@45 -- # for sub in "$@" 00:33:18.997 23:00:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:18.997 23:00:01 -- target/dif.sh@36 -- # local sub_id=0 00:33:18.997 23:00:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 23:00:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 00:33:18.997 real 0m11.218s 00:33:18.997 user 0m22.775s 00:33:18.997 sys 0m0.704s 00:33:18.997 23:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 ************************************ 00:33:18.997 END TEST fio_dif_1_default 00:33:18.997 ************************************ 00:33:18.997 23:00:01 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:18.997 23:00:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:18.997 23:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 ************************************ 00:33:18.997 START TEST fio_dif_1_multi_subsystems 00:33:18.997 
************************************ 00:33:18.997 23:00:01 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:33:18.997 23:00:01 -- target/dif.sh@92 -- # local files=1 00:33:18.997 23:00:01 -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:18.997 23:00:01 -- target/dif.sh@28 -- # local sub 00:33:18.997 23:00:01 -- target/dif.sh@30 -- # for sub in "$@" 00:33:18.997 23:00:01 -- target/dif.sh@31 -- # create_subsystem 0 00:33:18.997 23:00:01 -- target/dif.sh@18 -- # local sub_id=0 00:33:18.997 23:00:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 bdev_null0 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 23:00:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 23:00:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 23:00:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 [2024-04-15 23:00:01.928910] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 23:00:01 -- target/dif.sh@30 -- # for sub in "$@" 00:33:18.997 23:00:01 -- target/dif.sh@31 -- # create_subsystem 1 00:33:18.997 23:00:01 -- target/dif.sh@18 -- # local sub_id=1 00:33:18.997 23:00:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 bdev_null1 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 23:00:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 23:00:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:18.997 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.997 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:33:18.997 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.997 23:00:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.998 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.998 23:00:01 -- common/autotest_common.sh@10 -- # set +x 
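For reference, the create_subsystems sequence traced above boils down to four RPCs per subsystem: create a null bdev carrying 16 bytes of metadata with DIF type 1, create the subsystem, map the bdev into it as a namespace, and add a TCP listener. A minimal sketch for subsystem 0 with the same arguments as this run (subsystem 1 differs only in the 0/1 suffixes):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information type 1.
  "$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Together with the earlier nvmf_create_transport -t tcp -o --dif-insert-or-strip call, this is what exercises the target-side DIF insert/strip path for these subsystems.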
00:33:18.998 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.998 23:00:01 -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:18.998 23:00:01 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:18.998 23:00:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:18.998 23:00:01 -- nvmf/common.sh@520 -- # config=() 00:33:18.998 23:00:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:18.998 23:00:01 -- nvmf/common.sh@520 -- # local subsystem config 00:33:18.998 23:00:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:18.998 23:00:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:18.998 23:00:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:18.998 { 00:33:18.998 "params": { 00:33:18.998 "name": "Nvme$subsystem", 00:33:18.998 "trtype": "$TEST_TRANSPORT", 00:33:18.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.998 "adrfam": "ipv4", 00:33:18.998 "trsvcid": "$NVMF_PORT", 00:33:18.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.998 "hdgst": ${hdgst:-false}, 00:33:18.998 "ddgst": ${ddgst:-false} 00:33:18.998 }, 00:33:18.998 "method": "bdev_nvme_attach_controller" 00:33:18.998 } 00:33:18.998 EOF 00:33:18.998 )") 00:33:18.998 23:00:01 -- target/dif.sh@82 -- # gen_fio_conf 00:33:18.998 23:00:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:18.998 23:00:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:18.998 23:00:01 -- target/dif.sh@54 -- # local file 00:33:18.998 23:00:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:18.998 23:00:01 -- target/dif.sh@56 -- # cat 00:33:18.998 23:00:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:18.998 23:00:01 -- common/autotest_common.sh@1320 -- # shift 00:33:18.998 23:00:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:18.998 23:00:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:18.998 23:00:01 -- nvmf/common.sh@542 -- # cat 00:33:18.998 23:00:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:18.998 23:00:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:18.998 23:00:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:18.998 23:00:01 -- target/dif.sh@72 -- # (( file <= files )) 00:33:18.998 23:00:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:18.998 23:00:01 -- target/dif.sh@73 -- # cat 00:33:18.998 23:00:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:18.998 23:00:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:18.998 { 00:33:18.998 "params": { 00:33:18.998 "name": "Nvme$subsystem", 00:33:18.998 "trtype": "$TEST_TRANSPORT", 00:33:18.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.998 "adrfam": "ipv4", 00:33:18.998 "trsvcid": "$NVMF_PORT", 00:33:18.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.998 "hdgst": ${hdgst:-false}, 00:33:18.998 "ddgst": ${ddgst:-false} 00:33:18.998 }, 00:33:18.998 "method": "bdev_nvme_attach_controller" 00:33:18.998 } 00:33:18.998 EOF 00:33:18.998 )") 00:33:18.998 23:00:01 -- target/dif.sh@72 -- # (( file++ )) 00:33:18.998 
23:00:01 -- target/dif.sh@72 -- # (( file <= files )) 00:33:18.998 23:00:01 -- nvmf/common.sh@542 -- # cat 00:33:18.998 23:00:01 -- nvmf/common.sh@544 -- # jq . 00:33:18.998 23:00:02 -- nvmf/common.sh@545 -- # IFS=, 00:33:18.998 23:00:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:18.998 "params": { 00:33:18.998 "name": "Nvme0", 00:33:18.998 "trtype": "tcp", 00:33:18.998 "traddr": "10.0.0.2", 00:33:18.998 "adrfam": "ipv4", 00:33:18.998 "trsvcid": "4420", 00:33:18.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:18.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:18.998 "hdgst": false, 00:33:18.998 "ddgst": false 00:33:18.998 }, 00:33:18.998 "method": "bdev_nvme_attach_controller" 00:33:18.998 },{ 00:33:18.998 "params": { 00:33:18.998 "name": "Nvme1", 00:33:18.998 "trtype": "tcp", 00:33:18.998 "traddr": "10.0.0.2", 00:33:18.998 "adrfam": "ipv4", 00:33:18.998 "trsvcid": "4420", 00:33:18.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.998 "hdgst": false, 00:33:18.998 "ddgst": false 00:33:18.998 }, 00:33:18.998 "method": "bdev_nvme_attach_controller" 00:33:18.998 }' 00:33:18.998 23:00:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:18.998 23:00:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:18.998 23:00:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:18.998 23:00:02 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:18.998 23:00:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:18.998 23:00:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:18.998 23:00:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:18.998 23:00:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:18.998 23:00:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:18.998 23:00:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:18.998 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:18.998 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:18.998 fio-3.35 00:33:18.998 Starting 2 threads 00:33:18.998 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.998 [2024-04-15 23:00:02.960342] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
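The block above is the host side of the test: gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters for Nvme0 and Nvme1, and fio is started with the SPDK bdev plugin preloaded, taking that JSON over /dev/fd/62 and the generated job file over /dev/fd/61. A rough stand-alone equivalent, with ordinary files in place of the process-substitution descriptors, is sketched below. The job parameters mirror the fio banner that follows (randread, 4 KiB blocks, iodepth 4, jobs filename0/filename1); the Nvme0n1/Nvme1n1 filenames assume SPDK's usual <controller>n<nsid> bdev naming, and /tmp/bdev_nvme.json stands for the printed JSON wrapped the way gen_nvmf_target_json emits it (the wrapper itself is not shown in this excerpt):

# Sketch only; the harness feeds both inputs via /dev/fd instead of real files.
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4096
iodepth=4
runtime=10
time_based=1
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev_nvme.json /tmp/dif.fio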
00:33:18.998 [2024-04-15 23:00:02.960391] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:28.997 00:33:28.997 filename0: (groupid=0, jobs=1): err= 0: pid=1357834: Mon Apr 15 23:00:13 2024 00:33:28.997 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10037msec) 00:33:28.997 slat (nsec): min=5341, max=30839, avg=7178.80, stdev=4429.27 00:33:28.997 clat (usec): min=40862, max=42655, avg=41800.54, stdev=375.69 00:33:28.997 lat (usec): min=40867, max=42680, avg=41807.72, stdev=376.14 00:33:28.997 clat percentiles (usec): 00:33:28.997 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:33:28.997 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:28.997 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:28.997 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:28.997 | 99.99th=[42730] 00:33:28.997 bw ( KiB/s): min= 352, max= 384, per=50.03%, avg=382.40, stdev= 7.16, samples=20 00:33:28.997 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:33:28.997 lat (msec) : 50=100.00% 00:33:28.997 cpu : usr=97.38%, sys=2.42%, ctx=10, majf=0, minf=222 00:33:28.997 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:28.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.997 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:28.997 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:28.997 filename1: (groupid=0, jobs=1): err= 0: pid=1357835: Mon Apr 15 23:00:13 2024 00:33:28.997 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:33:28.997 slat (nsec): min=5342, max=31458, avg=7230.30, stdev=4384.64 00:33:28.997 clat (usec): min=41008, max=42920, avg=41979.87, stdev=123.54 00:33:28.997 lat (usec): min=41016, max=42926, avg=41987.11, stdev=123.11 00:33:28.997 clat percentiles (usec): 00:33:28.997 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:33:28.997 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:28.997 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:28.997 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:28.997 | 99.99th=[42730] 00:33:28.997 bw ( KiB/s): min= 352, max= 384, per=49.77%, avg=380.80, stdev= 9.85, samples=20 00:33:28.997 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:33:28.997 lat (msec) : 50=100.00% 00:33:28.997 cpu : usr=97.27%, sys=2.52%, ctx=20, majf=0, minf=62 00:33:28.997 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:28.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.997 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:28.997 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:28.997 00:33:28.997 Run status group 0 (all jobs): 00:33:28.997 READ: bw=763KiB/s (782kB/s), 381KiB/s-383KiB/s (390kB/s-392kB/s), io=7664KiB (7848kB), run=10037-10038msec 00:33:28.997 23:00:13 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:28.997 23:00:13 -- target/dif.sh@43 -- # local sub 00:33:28.997 23:00:13 -- target/dif.sh@45 -- # for sub in "$@" 00:33:28.997 23:00:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:28.997 23:00:13 -- target/dif.sh@36 -- # local sub_id=0 
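The run summary above is internally consistent, which is a quick way to sanity-check results like these: with iodepth 4 and an average completion latency of roughly 42 ms, Little's law gives about 4 / 0.042 s ≈ 95 IOPS per job, and 95 IOPS at 4 KiB is ≈ 381 KiB/s per job, so the two jobs land at the reported ~763 KiB/s aggregate. The same check as a one-liner (bc used purely for illustration):

# iodepth / clat  ->  IOPS ;  IOPS * block size  ->  bandwidth
echo 'scale=1; 4/0.042' | bc        # ~95.2 IOPS per job
echo 'scale=1; 95.2*4' | bc         # ~381 KiB/s per job; two jobs ~= 763 KiB/s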
00:33:28.997 23:00:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:28.997 23:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.997 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.997 23:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.997 23:00:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:28.997 23:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.997 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.997 23:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.997 23:00:13 -- target/dif.sh@45 -- # for sub in "$@" 00:33:28.997 23:00:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:28.997 23:00:13 -- target/dif.sh@36 -- # local sub_id=1 00:33:28.997 23:00:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:28.997 23:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.997 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.997 23:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.997 23:00:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:28.997 23:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.998 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 23:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.998 00:33:28.998 real 0m11.414s 00:33:28.998 user 0m35.340s 00:33:28.998 sys 0m0.820s 00:33:28.998 23:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.998 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 ************************************ 00:33:28.998 END TEST fio_dif_1_multi_subsystems 00:33:28.998 ************************************ 00:33:28.998 23:00:13 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:28.998 23:00:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:28.998 23:00:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:28.998 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 ************************************ 00:33:28.998 START TEST fio_dif_rand_params 00:33:28.998 ************************************ 00:33:28.998 23:00:13 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:33:28.998 23:00:13 -- target/dif.sh@100 -- # local NULL_DIF 00:33:28.998 23:00:13 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:28.998 23:00:13 -- target/dif.sh@103 -- # NULL_DIF=3 00:33:28.998 23:00:13 -- target/dif.sh@103 -- # bs=128k 00:33:28.998 23:00:13 -- target/dif.sh@103 -- # numjobs=3 00:33:28.998 23:00:13 -- target/dif.sh@103 -- # iodepth=3 00:33:28.998 23:00:13 -- target/dif.sh@103 -- # runtime=5 00:33:28.998 23:00:13 -- target/dif.sh@105 -- # create_subsystems 0 00:33:28.998 23:00:13 -- target/dif.sh@28 -- # local sub 00:33:28.998 23:00:13 -- target/dif.sh@30 -- # for sub in "$@" 00:33:28.998 23:00:13 -- target/dif.sh@31 -- # create_subsystem 0 00:33:28.998 23:00:13 -- target/dif.sh@18 -- # local sub_id=0 00:33:28.998 23:00:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:28.998 23:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.998 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 bdev_null0 00:33:28.998 23:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.998 23:00:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:28.998 23:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.998 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 23:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.998 23:00:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:28.998 23:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.998 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 23:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.998 23:00:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:28.998 23:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.998 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 [2024-04-15 23:00:13.390929] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.998 23:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.998 23:00:13 -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:28.998 23:00:13 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:28.998 23:00:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:28.998 23:00:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:28.998 23:00:13 -- nvmf/common.sh@520 -- # config=() 00:33:28.998 23:00:13 -- nvmf/common.sh@520 -- # local subsystem config 00:33:28.998 23:00:13 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:28.998 23:00:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:28.998 23:00:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:28.998 23:00:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:28.998 { 00:33:28.998 "params": { 00:33:28.998 "name": "Nvme$subsystem", 00:33:28.998 "trtype": "$TEST_TRANSPORT", 00:33:28.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.998 "adrfam": "ipv4", 00:33:28.998 "trsvcid": "$NVMF_PORT", 00:33:28.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.998 "hdgst": ${hdgst:-false}, 00:33:28.998 "ddgst": ${ddgst:-false} 00:33:28.998 }, 00:33:28.998 "method": "bdev_nvme_attach_controller" 00:33:28.998 } 00:33:28.998 EOF 00:33:28.998 )") 00:33:28.998 23:00:13 -- target/dif.sh@82 -- # gen_fio_conf 00:33:28.998 23:00:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:28.998 23:00:13 -- target/dif.sh@54 -- # local file 00:33:28.998 23:00:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:28.998 23:00:13 -- target/dif.sh@56 -- # cat 00:33:28.998 23:00:13 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.998 23:00:13 -- common/autotest_common.sh@1320 -- # shift 00:33:28.998 23:00:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:28.998 23:00:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.998 23:00:13 -- nvmf/common.sh@542 -- # cat 00:33:28.998 23:00:13 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.998 23:00:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:28.998 23:00:13 
-- common/autotest_common.sh@1324 -- # grep libasan 00:33:28.998 23:00:13 -- target/dif.sh@72 -- # (( file <= files )) 00:33:28.998 23:00:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:28.998 23:00:13 -- nvmf/common.sh@544 -- # jq . 00:33:28.998 23:00:13 -- nvmf/common.sh@545 -- # IFS=, 00:33:28.998 23:00:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:28.998 "params": { 00:33:28.998 "name": "Nvme0", 00:33:28.998 "trtype": "tcp", 00:33:28.998 "traddr": "10.0.0.2", 00:33:28.998 "adrfam": "ipv4", 00:33:28.998 "trsvcid": "4420", 00:33:28.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:28.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:28.998 "hdgst": false, 00:33:28.998 "ddgst": false 00:33:28.998 }, 00:33:28.998 "method": "bdev_nvme_attach_controller" 00:33:28.998 }' 00:33:28.998 23:00:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:28.998 23:00:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:28.998 23:00:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.998 23:00:13 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.998 23:00:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:28.998 23:00:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:28.998 23:00:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:28.998 23:00:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:28.998 23:00:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:28.998 23:00:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.259 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:29.259 ... 00:33:29.259 fio-3.35 00:33:29.259 Starting 3 threads 00:33:29.259 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.519 [2024-04-15 23:00:14.290982] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
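Most of the xtrace noise in this block comes from gen_nvmf_target_json: it accumulates one bdev_nvme_attach_controller fragment per subsystem in a bash array via here-docs, runs the result through jq, then comma-joins the fragments for fio's --spdk_json_conf. A condensed sketch of that pattern for the single-subsystem case traced above (variable names follow the trace; the outer object the real helper wraps around this list is not shown in this excerpt):

# Illustrative sketch of the config-assembly pattern seen in nvmf/common.sh.
config=()
for subsystem in 0; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join in a subshell so the IFS change does not leak out.
(IFS=,; printf '%s\n' "${config[*]}")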
00:33:29.520 [2024-04-15 23:00:14.291026] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:34.836 00:33:34.837 filename0: (groupid=0, jobs=1): err= 0: pid=1360763: Mon Apr 15 23:00:19 2024 00:33:34.837 read: IOPS=166, BW=20.8MiB/s (21.9MB/s)(104MiB/5007msec) 00:33:34.837 slat (nsec): min=5376, max=34476, avg=6944.57, stdev=2139.96 00:33:34.837 clat (usec): min=6661, max=94592, avg=17972.77, stdev=16515.05 00:33:34.837 lat (usec): min=6666, max=94602, avg=17979.71, stdev=16515.33 00:33:34.837 clat percentiles (usec): 00:33:34.837 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9503], 00:33:34.837 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[11994], 00:33:34.837 | 70.00th=[12911], 80.00th=[14484], 90.00th=[50594], 95.00th=[51643], 00:33:34.837 | 99.00th=[88605], 99.50th=[90702], 99.90th=[94897], 99.95th=[94897], 00:33:34.837 | 99.99th=[94897] 00:33:34.837 bw ( KiB/s): min=13056, max=35584, per=30.44%, avg=21299.20, stdev=6929.15, samples=10 00:33:34.837 iops : min= 102, max= 278, avg=166.40, stdev=54.13, samples=10 00:33:34.837 lat (msec) : 10=30.18%, 20=53.41%, 50=4.19%, 100=12.22% 00:33:34.837 cpu : usr=96.26%, sys=3.50%, ctx=9, majf=0, minf=92 00:33:34.837 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.837 issued rwts: total=835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.837 filename0: (groupid=0, jobs=1): err= 0: pid=1360764: Mon Apr 15 23:00:19 2024 00:33:34.837 read: IOPS=180, BW=22.6MiB/s (23.7MB/s)(114MiB/5045msec) 00:33:34.837 slat (nsec): min=5354, max=32806, avg=6532.31, stdev=1590.70 00:33:34.837 clat (usec): min=6061, max=91739, avg=16520.98, stdev=15330.46 00:33:34.837 lat (usec): min=6068, max=91746, avg=16527.52, stdev=15330.42 00:33:34.837 clat percentiles (usec): 00:33:34.837 | 1.00th=[ 6456], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8586], 00:33:34.837 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10814], 60.00th=[11469], 00:33:34.837 | 70.00th=[12256], 80.00th=[13566], 90.00th=[50070], 95.00th=[51643], 00:33:34.837 | 99.00th=[54264], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:33:34.837 | 99.99th=[91751] 00:33:34.837 bw ( KiB/s): min=15360, max=28928, per=33.33%, avg=23321.60, stdev=4621.33, samples=10 00:33:34.837 iops : min= 120, max= 226, avg=182.20, stdev=36.10, samples=10 00:33:34.837 lat (msec) : 10=39.32%, 20=45.89%, 50=4.71%, 100=10.08% 00:33:34.837 cpu : usr=95.90%, sys=3.89%, ctx=18, majf=0, minf=102 00:33:34.837 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.837 issued rwts: total=913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.837 filename0: (groupid=0, jobs=1): err= 0: pid=1360765: Mon Apr 15 23:00:19 2024 00:33:34.837 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(126MiB/5007msec) 00:33:34.837 slat (nsec): min=5360, max=29522, avg=7360.52, stdev=1759.73 00:33:34.837 clat (usec): min=5436, max=92573, avg=14859.56, stdev=13557.89 00:33:34.837 lat (usec): min=5445, max=92579, avg=14866.92, stdev=13557.83 00:33:34.837 clat percentiles 
(usec): 00:33:34.837 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7504], 20.00th=[ 8356], 00:33:34.837 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11076], 00:33:34.837 | 70.00th=[11994], 80.00th=[13173], 90.00th=[47973], 95.00th=[50594], 00:33:34.837 | 99.00th=[53740], 99.50th=[56886], 99.90th=[90702], 99.95th=[92799], 00:33:34.837 | 99.99th=[92799] 00:33:34.837 bw ( KiB/s): min=11776, max=39936, per=36.84%, avg=25779.20, stdev=8700.69, samples=10 00:33:34.837 iops : min= 92, max= 312, avg=201.40, stdev=67.97, samples=10 00:33:34.837 lat (msec) : 10=45.84%, 20=42.87%, 50=4.55%, 100=6.73% 00:33:34.837 cpu : usr=95.92%, sys=3.86%, ctx=14, majf=0, minf=114 00:33:34.837 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.837 issued rwts: total=1010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.837 00:33:34.837 Run status group 0 (all jobs): 00:33:34.837 READ: bw=68.3MiB/s (71.7MB/s), 20.8MiB/s-25.2MiB/s (21.9MB/s-26.4MB/s), io=345MiB (361MB), run=5007-5045msec 00:33:34.837 23:00:19 -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:34.837 23:00:19 -- target/dif.sh@43 -- # local sub 00:33:34.837 23:00:19 -- target/dif.sh@45 -- # for sub in "$@" 00:33:34.837 23:00:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:34.837 23:00:19 -- target/dif.sh@36 -- # local sub_id=0 00:33:34.837 23:00:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:34.837 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:34.837 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:34.837 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:34.837 23:00:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:34.837 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:34.837 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:34.837 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:34.837 23:00:19 -- target/dif.sh@109 -- # NULL_DIF=2 00:33:34.837 23:00:19 -- target/dif.sh@109 -- # bs=4k 00:33:34.837 23:00:19 -- target/dif.sh@109 -- # numjobs=8 00:33:34.837 23:00:19 -- target/dif.sh@109 -- # iodepth=16 00:33:34.837 23:00:19 -- target/dif.sh@109 -- # runtime= 00:33:34.837 23:00:19 -- target/dif.sh@109 -- # files=2 00:33:34.837 23:00:19 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:34.837 23:00:19 -- target/dif.sh@28 -- # local sub 00:33:35.102 23:00:19 -- target/dif.sh@30 -- # for sub in "$@" 00:33:35.102 23:00:19 -- target/dif.sh@31 -- # create_subsystem 0 00:33:35.102 23:00:19 -- target/dif.sh@18 -- # local sub_id=0 00:33:35.102 23:00:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:35.102 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.102 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.102 bdev_null0 00:33:35.102 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.102 23:00:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:35.102 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.102 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.102 23:00:19 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:33:35.102 23:00:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:35.102 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.102 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.102 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.102 23:00:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:35.102 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.102 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.102 [2024-04-15 23:00:19.685661] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.102 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.102 23:00:19 -- target/dif.sh@30 -- # for sub in "$@" 00:33:35.102 23:00:19 -- target/dif.sh@31 -- # create_subsystem 1 00:33:35.102 23:00:19 -- target/dif.sh@18 -- # local sub_id=1 00:33:35.102 23:00:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:35.102 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.102 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.102 bdev_null1 00:33:35.102 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.102 23:00:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:35.102 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.102 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.102 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.102 23:00:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:35.102 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.102 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.102 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.102 23:00:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:35.102 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.102 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.102 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.102 23:00:19 -- target/dif.sh@30 -- # for sub in "$@" 00:33:35.102 23:00:19 -- target/dif.sh@31 -- # create_subsystem 2 00:33:35.102 23:00:19 -- target/dif.sh@18 -- # local sub_id=2 00:33:35.102 23:00:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:35.103 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.103 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.103 bdev_null2 00:33:35.103 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.103 23:00:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:35.103 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.103 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.103 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.103 23:00:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:35.103 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.103 23:00:19 -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.103 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.103 23:00:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:35.103 23:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.103 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:33:35.103 23:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:35.103 23:00:19 -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:35.103 23:00:19 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:35.103 23:00:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:35.103 23:00:19 -- nvmf/common.sh@520 -- # config=() 00:33:35.103 23:00:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.103 23:00:19 -- nvmf/common.sh@520 -- # local subsystem config 00:33:35.103 23:00:19 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.103 23:00:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:35.103 23:00:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:35.103 { 00:33:35.103 "params": { 00:33:35.103 "name": "Nvme$subsystem", 00:33:35.103 "trtype": "$TEST_TRANSPORT", 00:33:35.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:35.103 "adrfam": "ipv4", 00:33:35.103 "trsvcid": "$NVMF_PORT", 00:33:35.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:35.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:35.103 "hdgst": ${hdgst:-false}, 00:33:35.103 "ddgst": ${ddgst:-false} 00:33:35.103 }, 00:33:35.103 "method": "bdev_nvme_attach_controller" 00:33:35.103 } 00:33:35.103 EOF 00:33:35.103 )") 00:33:35.103 23:00:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:35.103 23:00:19 -- target/dif.sh@82 -- # gen_fio_conf 00:33:35.103 23:00:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:35.103 23:00:19 -- target/dif.sh@54 -- # local file 00:33:35.103 23:00:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:35.103 23:00:19 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.103 23:00:19 -- target/dif.sh@56 -- # cat 00:33:35.103 23:00:19 -- common/autotest_common.sh@1320 -- # shift 00:33:35.103 23:00:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:35.103 23:00:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:35.103 23:00:19 -- nvmf/common.sh@542 -- # cat 00:33:35.103 23:00:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.103 23:00:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:35.103 23:00:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:35.103 23:00:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:35.103 23:00:19 -- target/dif.sh@72 -- # (( file <= files )) 00:33:35.103 23:00:19 -- target/dif.sh@73 -- # cat 00:33:35.103 23:00:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:35.103 23:00:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:35.103 { 00:33:35.103 "params": { 00:33:35.103 "name": "Nvme$subsystem", 00:33:35.103 "trtype": "$TEST_TRANSPORT", 00:33:35.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:35.103 "adrfam": "ipv4", 00:33:35.103 "trsvcid": 
"$NVMF_PORT", 00:33:35.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:35.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:35.103 "hdgst": ${hdgst:-false}, 00:33:35.103 "ddgst": ${ddgst:-false} 00:33:35.103 }, 00:33:35.103 "method": "bdev_nvme_attach_controller" 00:33:35.103 } 00:33:35.103 EOF 00:33:35.103 )") 00:33:35.103 23:00:19 -- target/dif.sh@72 -- # (( file++ )) 00:33:35.103 23:00:19 -- nvmf/common.sh@542 -- # cat 00:33:35.103 23:00:19 -- target/dif.sh@72 -- # (( file <= files )) 00:33:35.103 23:00:19 -- target/dif.sh@73 -- # cat 00:33:35.103 23:00:19 -- target/dif.sh@72 -- # (( file++ )) 00:33:35.103 23:00:19 -- target/dif.sh@72 -- # (( file <= files )) 00:33:35.103 23:00:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:35.103 23:00:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:35.103 { 00:33:35.103 "params": { 00:33:35.103 "name": "Nvme$subsystem", 00:33:35.103 "trtype": "$TEST_TRANSPORT", 00:33:35.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:35.103 "adrfam": "ipv4", 00:33:35.103 "trsvcid": "$NVMF_PORT", 00:33:35.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:35.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:35.103 "hdgst": ${hdgst:-false}, 00:33:35.103 "ddgst": ${ddgst:-false} 00:33:35.103 }, 00:33:35.103 "method": "bdev_nvme_attach_controller" 00:33:35.103 } 00:33:35.103 EOF 00:33:35.103 )") 00:33:35.103 23:00:19 -- nvmf/common.sh@542 -- # cat 00:33:35.103 23:00:19 -- nvmf/common.sh@544 -- # jq . 00:33:35.103 23:00:19 -- nvmf/common.sh@545 -- # IFS=, 00:33:35.103 23:00:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:35.103 "params": { 00:33:35.103 "name": "Nvme0", 00:33:35.103 "trtype": "tcp", 00:33:35.103 "traddr": "10.0.0.2", 00:33:35.103 "adrfam": "ipv4", 00:33:35.103 "trsvcid": "4420", 00:33:35.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:35.103 "hdgst": false, 00:33:35.103 "ddgst": false 00:33:35.103 }, 00:33:35.103 "method": "bdev_nvme_attach_controller" 00:33:35.103 },{ 00:33:35.103 "params": { 00:33:35.103 "name": "Nvme1", 00:33:35.103 "trtype": "tcp", 00:33:35.103 "traddr": "10.0.0.2", 00:33:35.103 "adrfam": "ipv4", 00:33:35.103 "trsvcid": "4420", 00:33:35.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:35.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:35.103 "hdgst": false, 00:33:35.103 "ddgst": false 00:33:35.103 }, 00:33:35.103 "method": "bdev_nvme_attach_controller" 00:33:35.103 },{ 00:33:35.103 "params": { 00:33:35.103 "name": "Nvme2", 00:33:35.103 "trtype": "tcp", 00:33:35.103 "traddr": "10.0.0.2", 00:33:35.103 "adrfam": "ipv4", 00:33:35.103 "trsvcid": "4420", 00:33:35.103 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:35.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:35.103 "hdgst": false, 00:33:35.103 "ddgst": false 00:33:35.103 }, 00:33:35.103 "method": "bdev_nvme_attach_controller" 00:33:35.103 }' 00:33:35.103 23:00:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:35.103 23:00:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:35.103 23:00:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:35.103 23:00:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.103 23:00:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:35.103 23:00:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:35.103 23:00:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:35.103 23:00:19 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:35.103 23:00:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:35.103 23:00:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.673 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:35.673 ... 00:33:35.673 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:35.673 ... 00:33:35.673 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:35.673 ... 00:33:35.673 fio-3.35 00:33:35.673 Starting 24 threads 00:33:35.673 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.248 [2024-04-15 23:00:20.827275] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:33:36.248 [2024-04-15 23:00:20.827323] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:46.288 00:33:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=1362061: Mon Apr 15 23:00:31 2024 00:33:46.288 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10008msec) 00:33:46.288 slat (usec): min=5, max=112, avg=26.70, stdev=15.87 00:33:46.288 clat (usec): min=17459, max=44703, avg=30186.18, stdev=1032.41 00:33:46.288 lat (usec): min=17486, max=44712, avg=30212.88, stdev=1031.27 00:33:46.288 clat percentiles (usec): 00:33:46.288 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:33:46.288 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.288 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:46.288 | 99.00th=[31851], 99.50th=[35390], 99.90th=[44827], 99.95th=[44827], 00:33:46.288 | 99.99th=[44827] 00:33:46.288 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=64.93, samples=19 00:33:46.288 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:33:46.288 lat (msec) : 20=0.11%, 50=99.89% 00:33:46.288 cpu : usr=99.24%, sys=0.49%, ctx=13, majf=0, minf=9 00:33:46.288 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=1362062: Mon Apr 15 23:00:31 2024 00:33:46.288 read: IOPS=527, BW=2111KiB/s (2162kB/s)(20.6MiB/10003msec) 00:33:46.288 slat (usec): min=5, max=105, avg=24.57, stdev=17.99 00:33:46.288 clat (usec): min=6695, max=56973, avg=30088.23, stdev=1800.19 00:33:46.288 lat (usec): min=6701, max=56996, avg=30112.80, stdev=1801.08 00:33:46.288 clat percentiles (usec): 00:33:46.288 | 1.00th=[28181], 5.00th=[28967], 10.00th=[29492], 20.00th=[29754], 00:33:46.288 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.288 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31065], 00:33:46.288 | 99.00th=[31851], 99.50th=[32375], 99.90th=[40633], 99.95th=[41681], 00:33:46.288 | 99.99th=[56886] 00:33:46.288 bw ( KiB/s): min= 2036, max= 2176, per=4.18%, avg=2102.11, stdev=60.14, samples=19 00:33:46.288 iops : min= 509, max= 
544, avg=525.53, stdev=15.03, samples=19 00:33:46.288 lat (msec) : 10=0.30%, 20=0.34%, 50=99.32%, 100=0.04% 00:33:46.288 cpu : usr=97.82%, sys=1.10%, ctx=153, majf=0, minf=9 00:33:46.288 IO depths : 1=2.3%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:33:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=1362063: Mon Apr 15 23:00:31 2024 00:33:46.288 read: IOPS=530, BW=2120KiB/s (2171kB/s)(20.7MiB/10009msec) 00:33:46.288 slat (usec): min=5, max=153, avg=31.07, stdev=21.74 00:33:46.288 clat (usec): min=12791, max=44343, avg=29889.34, stdev=2136.51 00:33:46.288 lat (usec): min=12801, max=44350, avg=29920.41, stdev=2138.02 00:33:46.288 clat percentiles (usec): 00:33:46.288 | 1.00th=[20579], 5.00th=[28181], 10.00th=[29230], 20.00th=[29492], 00:33:46.288 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:46.288 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31327], 00:33:46.288 | 99.00th=[36439], 99.50th=[40109], 99.90th=[42730], 99.95th=[44303], 00:33:46.288 | 99.99th=[44303] 00:33:46.288 bw ( KiB/s): min= 2048, max= 2320, per=4.21%, avg=2119.79, stdev=84.04, samples=19 00:33:46.288 iops : min= 512, max= 580, avg=529.95, stdev=21.01, samples=19 00:33:46.288 lat (msec) : 20=0.87%, 50=99.13% 00:33:46.288 cpu : usr=97.80%, sys=1.20%, ctx=41, majf=0, minf=9 00:33:46.288 IO depths : 1=5.2%, 2=10.5%, 4=22.0%, 8=54.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:33:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 issued rwts: total=5306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=1362064: Mon Apr 15 23:00:31 2024 00:33:46.288 read: IOPS=519, BW=2076KiB/s (2126kB/s)(20.3MiB/10022msec) 00:33:46.288 slat (usec): min=5, max=124, avg=20.78, stdev=20.13 00:33:46.288 clat (usec): min=7558, max=55877, avg=30663.44, stdev=4479.26 00:33:46.288 lat (usec): min=7567, max=55890, avg=30684.22, stdev=4479.01 00:33:46.288 clat percentiles (usec): 00:33:46.288 | 1.00th=[16188], 5.00th=[25035], 10.00th=[28967], 20.00th=[29754], 00:33:46.288 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:46.288 | 70.00th=[30540], 80.00th=[31065], 90.00th=[32637], 95.00th=[39060], 00:33:46.288 | 99.00th=[49546], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:33:46.288 | 99.99th=[55837] 00:33:46.288 bw ( KiB/s): min= 1923, max= 2176, per=4.12%, avg=2076.95, stdev=61.59, samples=20 00:33:46.288 iops : min= 480, max= 544, avg=519.20, stdev=15.50, samples=20 00:33:46.288 lat (msec) : 10=0.15%, 20=2.10%, 50=96.91%, 100=0.85% 00:33:46.288 cpu : usr=97.72%, sys=1.28%, ctx=47, majf=0, minf=9 00:33:46.288 IO depths : 1=1.2%, 2=2.9%, 4=10.0%, 8=72.0%, 16=13.9%, 32=0.0%, >=64=0.0% 00:33:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 complete : 0=0.0%, 4=91.2%, 8=5.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 issued rwts: total=5202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.288 filename0: 
(groupid=0, jobs=1): err= 0: pid=1362065: Mon Apr 15 23:00:31 2024 00:33:46.288 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.4MiB/10011msec) 00:33:46.288 slat (usec): min=5, max=124, avg=33.90, stdev=21.92 00:33:46.288 clat (usec): min=15512, max=57190, avg=30270.64, stdev=2159.83 00:33:46.288 lat (usec): min=15518, max=57219, avg=30304.55, stdev=2159.45 00:33:46.288 clat percentiles (usec): 00:33:46.288 | 1.00th=[26608], 5.00th=[28967], 10.00th=[29230], 20.00th=[29754], 00:33:46.288 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:46.288 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31327], 00:33:46.288 | 99.00th=[39584], 99.50th=[43779], 99.90th=[54789], 99.95th=[54789], 00:33:46.288 | 99.99th=[57410] 00:33:46.288 bw ( KiB/s): min= 2032, max= 2176, per=4.15%, avg=2090.95, stdev=60.59, samples=19 00:33:46.288 iops : min= 508, max= 544, avg=522.74, stdev=15.15, samples=19 00:33:46.288 lat (msec) : 20=0.31%, 50=99.54%, 100=0.15% 00:33:46.288 cpu : usr=99.27%, sys=0.42%, ctx=73, majf=0, minf=9 00:33:46.288 IO depths : 1=5.8%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=1362066: Mon Apr 15 23:00:31 2024 00:33:46.288 read: IOPS=516, BW=2066KiB/s (2115kB/s)(20.2MiB/10008msec) 00:33:46.288 slat (usec): min=5, max=106, avg=11.02, stdev= 9.27 00:33:46.288 clat (usec): min=8164, max=55395, avg=30907.42, stdev=5310.36 00:33:46.288 lat (usec): min=8173, max=55404, avg=30918.44, stdev=5311.07 00:33:46.288 clat percentiles (usec): 00:33:46.288 | 1.00th=[16712], 5.00th=[21890], 10.00th=[27657], 20.00th=[29754], 00:33:46.288 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:46.288 | 70.00th=[30802], 80.00th=[31327], 90.00th=[34866], 95.00th=[41157], 00:33:46.288 | 99.00th=[52167], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 00:33:46.288 | 99.99th=[55313] 00:33:46.288 bw ( KiB/s): min= 1792, max= 2304, per=4.11%, avg=2068.21, stdev=97.19, samples=19 00:33:46.288 iops : min= 448, max= 576, avg=517.05, stdev=24.30, samples=19 00:33:46.288 lat (msec) : 10=0.08%, 20=2.84%, 50=95.55%, 100=1.53% 00:33:46.288 cpu : usr=98.85%, sys=0.85%, ctx=21, majf=0, minf=9 00:33:46.288 IO depths : 1=1.0%, 2=2.5%, 4=10.6%, 8=72.5%, 16=13.4%, 32=0.0%, >=64=0.0% 00:33:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 complete : 0=0.0%, 4=91.1%, 8=5.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.288 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=1362067: Mon Apr 15 23:00:31 2024 00:33:46.288 read: IOPS=527, BW=2109KiB/s (2160kB/s)(20.6MiB/10006msec) 00:33:46.288 slat (usec): min=5, max=119, avg=30.62, stdev=22.05 00:33:46.288 clat (usec): min=11194, max=50737, avg=30065.79, stdev=2261.37 00:33:46.288 lat (usec): min=11203, max=50758, avg=30096.41, stdev=2262.22 00:33:46.288 clat percentiles (usec): 00:33:46.289 | 1.00th=[19006], 5.00th=[28967], 10.00th=[29230], 20.00th=[29754], 00:33:46.289 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:46.289 | 70.00th=[30540], 
80.00th=[30540], 90.00th=[31065], 95.00th=[31327], 00:33:46.289 | 99.00th=[36963], 99.50th=[41681], 99.90th=[50594], 99.95th=[50594], 00:33:46.289 | 99.99th=[50594] 00:33:46.289 bw ( KiB/s): min= 1920, max= 2352, per=4.18%, avg=2106.95, stdev=97.77, samples=19 00:33:46.289 iops : min= 480, max= 588, avg=526.74, stdev=24.44, samples=19 00:33:46.289 lat (msec) : 20=1.02%, 50=98.67%, 100=0.30% 00:33:46.289 cpu : usr=99.05%, sys=0.59%, ctx=73, majf=0, minf=9 00:33:46.289 IO depths : 1=5.7%, 2=11.6%, 4=24.0%, 8=51.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 issued rwts: total=5276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.289 filename0: (groupid=0, jobs=1): err= 0: pid=1362068: Mon Apr 15 23:00:31 2024 00:33:46.289 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.8MiB/10003msec) 00:33:46.289 slat (usec): min=5, max=119, avg=19.43, stdev=14.75 00:33:46.289 clat (usec): min=11229, max=50610, avg=29846.51, stdev=3104.94 00:33:46.289 lat (usec): min=11235, max=50619, avg=29865.94, stdev=3106.69 00:33:46.289 clat percentiles (usec): 00:33:46.289 | 1.00th=[17695], 5.00th=[23725], 10.00th=[28967], 20.00th=[29492], 00:33:46.289 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.289 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31589], 00:33:46.289 | 99.00th=[40633], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:33:46.289 | 99.99th=[50594] 00:33:46.289 bw ( KiB/s): min= 2048, max= 2272, per=4.24%, avg=2132.21, stdev=72.12, samples=19 00:33:46.289 iops : min= 512, max= 568, avg=533.05, stdev=18.03, samples=19 00:33:46.289 lat (msec) : 20=2.31%, 50=97.47%, 100=0.22% 00:33:46.289 cpu : usr=99.17%, sys=0.56%, ctx=14, majf=0, minf=9 00:33:46.289 IO depths : 1=3.6%, 2=8.7%, 4=21.0%, 8=57.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:33:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 complete : 0=0.0%, 4=93.2%, 8=1.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 issued rwts: total=5336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=1362069: Mon Apr 15 23:00:31 2024 00:33:46.289 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10009msec) 00:33:46.289 slat (usec): min=5, max=113, avg=31.90, stdev=19.92 00:33:46.289 clat (usec): min=17584, max=51644, avg=30123.06, stdev=1590.79 00:33:46.289 lat (usec): min=17590, max=51651, avg=30154.96, stdev=1590.68 00:33:46.289 clat percentiles (usec): 00:33:46.289 | 1.00th=[23987], 5.00th=[29230], 10.00th=[29230], 20.00th=[29754], 00:33:46.289 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:46.289 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31327], 00:33:46.289 | 99.00th=[34866], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:33:46.289 | 99.99th=[51643] 00:33:46.289 bw ( KiB/s): min= 2048, max= 2176, per=4.18%, avg=2102.11, stdev=62.06, samples=19 00:33:46.289 iops : min= 512, max= 544, avg=525.53, stdev=15.51, samples=19 00:33:46.289 lat (msec) : 20=0.57%, 50=99.39%, 100=0.04% 00:33:46.289 cpu : usr=99.07%, sys=0.58%, ctx=71, majf=0, minf=9 00:33:46.289 IO depths : 1=5.9%, 2=11.8%, 4=24.4%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=1362070: Mon Apr 15 23:00:31 2024 00:33:46.289 read: IOPS=513, BW=2053KiB/s (2102kB/s)(20.1MiB/10013msec) 00:33:46.289 slat (usec): min=5, max=120, avg=14.83, stdev=14.53 00:33:46.289 clat (usec): min=7731, max=57170, avg=31097.04, stdev=6188.58 00:33:46.289 lat (usec): min=7741, max=57175, avg=31111.87, stdev=6189.12 00:33:46.289 clat percentiles (usec): 00:33:46.289 | 1.00th=[15926], 5.00th=[20317], 10.00th=[24249], 20.00th=[29492], 00:33:46.289 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:46.289 | 70.00th=[31065], 80.00th=[32900], 90.00th=[39584], 95.00th=[43779], 00:33:46.289 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:33:46.289 | 99.99th=[57410] 00:33:46.289 bw ( KiB/s): min= 1968, max= 2264, per=4.08%, avg=2053.26, stdev=75.85, samples=19 00:33:46.289 iops : min= 492, max= 566, avg=513.32, stdev=18.96, samples=19 00:33:46.289 lat (msec) : 10=0.29%, 20=4.44%, 50=93.77%, 100=1.50% 00:33:46.289 cpu : usr=99.35%, sys=0.38%, ctx=14, majf=0, minf=9 00:33:46.289 IO depths : 1=0.6%, 2=1.3%, 4=7.4%, 8=76.2%, 16=14.6%, 32=0.0%, >=64=0.0% 00:33:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 complete : 0=0.0%, 4=90.1%, 8=6.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 issued rwts: total=5138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=1362072: Mon Apr 15 23:00:31 2024 00:33:46.289 read: IOPS=524, BW=2100KiB/s (2150kB/s)(20.5MiB/10008msec) 00:33:46.289 slat (nsec): min=5358, max=96612, avg=13783.28, stdev=10116.42 00:33:46.289 clat (usec): min=11149, max=52700, avg=30366.88, stdev=3412.34 00:33:46.289 lat (usec): min=11155, max=52706, avg=30380.66, stdev=3413.07 00:33:46.289 clat percentiles (usec): 00:33:46.289 | 1.00th=[19268], 5.00th=[27395], 10.00th=[29230], 20.00th=[29754], 00:33:46.289 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:46.289 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[32900], 00:33:46.289 | 99.00th=[49546], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:33:46.289 | 99.99th=[52691] 00:33:46.289 bw ( KiB/s): min= 2000, max= 2224, per=4.16%, avg=2095.16, stdev=72.24, samples=19 00:33:46.289 iops : min= 500, max= 556, avg=523.79, stdev=18.06, samples=19 00:33:46.289 lat (msec) : 20=1.71%, 50=97.70%, 100=0.59% 00:33:46.289 cpu : usr=98.97%, sys=0.71%, ctx=14, majf=0, minf=9 00:33:46.289 IO depths : 1=4.9%, 2=10.1%, 4=22.1%, 8=55.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:33:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=1362073: Mon Apr 15 23:00:31 2024 00:33:46.289 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.4MiB/10014msec) 00:33:46.289 slat (usec): min=5, max=123, avg=30.11, stdev=21.80 00:33:46.289 clat (usec): min=14257, max=60375, avg=30356.76, stdev=2246.29 00:33:46.289 lat 
(usec): min=14266, max=60407, avg=30386.87, stdev=2244.66 00:33:46.289 clat percentiles (usec): 00:33:46.289 | 1.00th=[23725], 5.00th=[28967], 10.00th=[29230], 20.00th=[29754], 00:33:46.289 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.289 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31851], 00:33:46.289 | 99.00th=[41681], 99.50th=[43254], 99.90th=[45351], 99.95th=[50070], 00:33:46.289 | 99.99th=[60556] 00:33:46.289 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2088.63, stdev=72.10, samples=19 00:33:46.289 iops : min= 480, max= 544, avg=522.16, stdev=18.03, samples=19 00:33:46.289 lat (msec) : 20=0.42%, 50=99.50%, 100=0.08% 00:33:46.289 cpu : usr=98.06%, sys=1.01%, ctx=71, majf=0, minf=9 00:33:46.289 IO depths : 1=3.8%, 2=9.5%, 4=23.8%, 8=54.1%, 16=8.8%, 32=0.0%, >=64=0.0% 00:33:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=1362074: Mon Apr 15 23:00:31 2024 00:33:46.289 read: IOPS=528, BW=2114KiB/s (2165kB/s)(20.6MiB/10002msec) 00:33:46.289 slat (usec): min=5, max=101, avg=20.41, stdev=14.84 00:33:46.289 clat (usec): min=6750, max=52747, avg=30080.94, stdev=2664.72 00:33:46.289 lat (usec): min=6757, max=52770, avg=30101.35, stdev=2665.38 00:33:46.289 clat percentiles (usec): 00:33:46.289 | 1.00th=[18220], 5.00th=[28967], 10.00th=[29230], 20.00th=[29754], 00:33:46.289 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.289 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31327], 00:33:46.289 | 99.00th=[33817], 99.50th=[44827], 99.90th=[52691], 99.95th=[52691], 00:33:46.289 | 99.99th=[52691] 00:33:46.289 bw ( KiB/s): min= 1920, max= 2224, per=4.18%, avg=2104.42, stdev=79.71, samples=19 00:33:46.289 iops : min= 480, max= 556, avg=526.11, stdev=19.93, samples=19 00:33:46.289 lat (msec) : 10=0.30%, 20=0.87%, 50=98.52%, 100=0.30% 00:33:46.289 cpu : usr=98.76%, sys=0.62%, ctx=48, majf=0, minf=9 00:33:46.289 IO depths : 1=5.7%, 2=11.8%, 4=24.6%, 8=51.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.289 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=1362075: Mon Apr 15 23:00:31 2024 00:33:46.289 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10008msec) 00:33:46.289 slat (usec): min=5, max=121, avg=26.37, stdev=22.27 00:33:46.289 clat (usec): min=17790, max=44632, avg=30198.33, stdev=1614.60 00:33:46.289 lat (usec): min=17799, max=44681, avg=30224.70, stdev=1613.46 00:33:46.289 clat percentiles (usec): 00:33:46.289 | 1.00th=[24249], 5.00th=[28967], 10.00th=[29230], 20.00th=[29754], 00:33:46.289 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.289 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31589], 00:33:46.289 | 99.00th=[34341], 99.50th=[40633], 99.90th=[43779], 99.95th=[44827], 00:33:46.289 | 99.99th=[44827] 00:33:46.289 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=63.38, samples=19 00:33:46.289 iops : min= 512, max= 544, 
avg=525.47, stdev=15.84, samples=19 00:33:46.289 lat (msec) : 20=0.49%, 50=99.51% 00:33:46.289 cpu : usr=99.38%, sys=0.34%, ctx=12, majf=0, minf=9 00:33:46.289 IO depths : 1=3.0%, 2=9.2%, 4=24.8%, 8=53.5%, 16=9.5%, 32=0.0%, >=64=0.0% 00:33:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.290 filename1: (groupid=0, jobs=1): err= 0: pid=1362076: Mon Apr 15 23:00:31 2024 00:33:46.290 read: IOPS=529, BW=2120KiB/s (2171kB/s)(20.7MiB/10001msec) 00:33:46.290 slat (usec): min=5, max=146, avg=25.38, stdev=21.46 00:33:46.290 clat (usec): min=9023, max=52608, avg=29973.71, stdev=4208.41 00:33:46.290 lat (usec): min=9029, max=52632, avg=29999.08, stdev=4210.06 00:33:46.290 clat percentiles (usec): 00:33:46.290 | 1.00th=[16188], 5.00th=[21627], 10.00th=[28443], 20.00th=[29492], 00:33:46.290 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:46.290 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31589], 95.00th=[34866], 00:33:46.290 | 99.00th=[47449], 99.50th=[48497], 99.90th=[52691], 99.95th=[52691], 00:33:46.290 | 99.99th=[52691] 00:33:46.290 bw ( KiB/s): min= 1891, max= 2208, per=4.17%, avg=2100.37, stdev=85.40, samples=19 00:33:46.290 iops : min= 472, max= 552, avg=525.05, stdev=21.45, samples=19 00:33:46.290 lat (msec) : 10=0.30%, 20=2.62%, 50=96.77%, 100=0.30% 00:33:46.290 cpu : usr=98.00%, sys=0.97%, ctx=113, majf=0, minf=9 00:33:46.290 IO depths : 1=3.2%, 2=7.1%, 4=16.6%, 8=62.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:33:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 complete : 0=0.0%, 4=92.2%, 8=3.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 issued rwts: total=5300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.290 filename1: (groupid=0, jobs=1): err= 0: pid=1362077: Mon Apr 15 23:00:31 2024 00:33:46.290 read: IOPS=526, BW=2106KiB/s (2157kB/s)(20.6MiB/10002msec) 00:33:46.290 slat (nsec): min=5357, max=86339, avg=15182.73, stdev=9755.58 00:33:46.290 clat (usec): min=3498, max=56746, avg=30274.01, stdev=2425.08 00:33:46.290 lat (usec): min=3505, max=56768, avg=30289.20, stdev=2425.20 00:33:46.290 clat percentiles (usec): 00:33:46.290 | 1.00th=[23462], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:33:46.290 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:46.290 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31589], 00:33:46.290 | 99.00th=[34866], 99.50th=[42730], 99.90th=[53216], 99.95th=[53216], 00:33:46.290 | 99.99th=[56886] 00:33:46.290 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2093.47, stdev=65.59, samples=19 00:33:46.290 iops : min= 480, max= 544, avg=523.37, stdev=16.40, samples=19 00:33:46.290 lat (msec) : 4=0.11%, 10=0.30%, 20=0.15%, 50=99.13%, 100=0.30% 00:33:46.290 cpu : usr=98.22%, sys=1.00%, ctx=126, majf=0, minf=9 00:33:46.290 IO depths : 1=0.5%, 2=6.4%, 4=23.7%, 8=57.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:33:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 complete : 0=0.0%, 4=94.1%, 8=0.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 issued rwts: total=5266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.290 
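(Note on reading these per-file blocks: "slat" is the submission latency, "clat" the completion latency, and "lat" the total per-IO latency, so the averages add up exactly. A quick check with the values from the pid=1362075 block above:

  # lat avg should equal slat avg + clat avg (numbers copied from the block above)
  awk 'BEGIN { printf "%.2f usec\n", 26.37 + 30198.33 }'   # -> 30224.70 usec, matching lat avg=30224.70
)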
filename2: (groupid=0, jobs=1): err= 0: pid=1362078: Mon Apr 15 23:00:31 2024 00:33:46.290 read: IOPS=503, BW=2015KiB/s (2063kB/s)(19.7MiB/10004msec) 00:33:46.290 slat (nsec): min=5507, max=99014, avg=15128.19, stdev=13820.04 00:33:46.290 clat (usec): min=6729, max=60201, avg=31685.63, stdev=5825.82 00:33:46.290 lat (usec): min=6735, max=60210, avg=31700.76, stdev=5825.35 00:33:46.290 clat percentiles (usec): 00:33:46.290 | 1.00th=[13435], 5.00th=[25822], 10.00th=[29230], 20.00th=[30016], 00:33:46.290 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:46.290 | 70.00th=[31065], 80.00th=[32113], 90.00th=[39584], 95.00th=[44303], 00:33:46.290 | 99.00th=[51643], 99.50th=[54789], 99.90th=[60031], 99.95th=[60031], 00:33:46.290 | 99.99th=[60031] 00:33:46.290 bw ( KiB/s): min= 1840, max= 2176, per=4.00%, avg=2012.63, stdev=75.77, samples=19 00:33:46.290 iops : min= 460, max= 544, avg=503.16, stdev=18.94, samples=19 00:33:46.290 lat (msec) : 10=0.40%, 20=2.24%, 50=95.93%, 100=1.43% 00:33:46.290 cpu : usr=99.14%, sys=0.57%, ctx=17, majf=0, minf=9 00:33:46.290 IO depths : 1=0.1%, 2=0.8%, 4=7.1%, 8=76.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:33:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 complete : 0=0.0%, 4=90.4%, 8=6.6%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 issued rwts: total=5039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=1362079: Mon Apr 15 23:00:31 2024 00:33:46.290 read: IOPS=525, BW=2100KiB/s (2150kB/s)(20.5MiB/10013msec) 00:33:46.290 slat (usec): min=5, max=109, avg=25.18, stdev=18.93 00:33:46.290 clat (usec): min=13223, max=50759, avg=30264.34, stdev=3032.25 00:33:46.290 lat (usec): min=13252, max=50793, avg=30289.52, stdev=3032.00 00:33:46.290 clat percentiles (usec): 00:33:46.290 | 1.00th=[20055], 5.00th=[28443], 10.00th=[29230], 20.00th=[29754], 00:33:46.290 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.290 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[32113], 00:33:46.290 | 99.00th=[45351], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:33:46.290 | 99.99th=[50594] 00:33:46.290 bw ( KiB/s): min= 1968, max= 2176, per=4.17%, avg=2100.42, stdev=67.47, samples=19 00:33:46.290 iops : min= 492, max= 544, avg=525.11, stdev=16.87, samples=19 00:33:46.290 lat (msec) : 20=1.12%, 50=98.54%, 100=0.34% 00:33:46.290 cpu : usr=99.24%, sys=0.47%, ctx=34, majf=0, minf=9 00:33:46.290 IO depths : 1=4.6%, 2=9.7%, 4=22.5%, 8=55.1%, 16=8.1%, 32=0.0%, >=64=0.0% 00:33:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 issued rwts: total=5257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=1362080: Mon Apr 15 23:00:31 2024 00:33:46.290 read: IOPS=528, BW=2114KiB/s (2164kB/s)(20.6MiB/10003msec) 00:33:46.290 slat (usec): min=5, max=107, avg=21.18, stdev=17.78 00:33:46.290 clat (usec): min=16230, max=42225, avg=30108.15, stdev=1443.33 00:33:46.290 lat (usec): min=16236, max=42243, avg=30129.32, stdev=1443.36 00:33:46.290 clat percentiles (usec): 00:33:46.290 | 1.00th=[23200], 5.00th=[28967], 10.00th=[29492], 20.00th=[29754], 00:33:46.290 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.290 | 
70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31327], 00:33:46.290 | 99.00th=[32375], 99.50th=[32900], 99.90th=[42206], 99.95th=[42206], 00:33:46.290 | 99.99th=[42206] 00:33:46.290 bw ( KiB/s): min= 2048, max= 2352, per=4.19%, avg=2111.16, stdev=85.41, samples=19 00:33:46.290 iops : min= 512, max= 588, avg=527.79, stdev=21.35, samples=19 00:33:46.290 lat (msec) : 20=0.53%, 50=99.47% 00:33:46.290 cpu : usr=99.39%, sys=0.32%, ctx=60, majf=0, minf=9 00:33:46.290 IO depths : 1=5.5%, 2=11.6%, 4=24.6%, 8=51.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=1362081: Mon Apr 15 23:00:31 2024 00:33:46.290 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.4MiB/10012msec) 00:33:46.290 slat (nsec): min=5361, max=83790, avg=15129.09, stdev=11825.06 00:33:46.290 clat (usec): min=9084, max=56924, avg=30486.66, stdev=3173.27 00:33:46.290 lat (usec): min=9094, max=56944, avg=30501.79, stdev=3173.37 00:33:46.290 clat percentiles (usec): 00:33:46.290 | 1.00th=[18744], 5.00th=[28967], 10.00th=[29492], 20.00th=[29754], 00:33:46.290 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:46.290 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31327], 95.00th=[32375], 00:33:46.290 | 99.00th=[44303], 99.50th=[49021], 99.90th=[51643], 99.95th=[56886], 00:33:46.290 | 99.99th=[56886] 00:33:46.290 bw ( KiB/s): min= 2000, max= 2176, per=4.15%, avg=2087.79, stdev=57.88, samples=19 00:33:46.290 iops : min= 500, max= 544, avg=521.95, stdev=14.47, samples=19 00:33:46.290 lat (msec) : 10=0.15%, 20=1.09%, 50=98.32%, 100=0.44% 00:33:46.290 cpu : usr=96.50%, sys=1.80%, ctx=91, majf=0, minf=9 00:33:46.290 IO depths : 1=2.8%, 2=5.9%, 4=14.7%, 8=64.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 complete : 0=0.0%, 4=92.1%, 8=4.4%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 issued rwts: total=5234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=1362082: Mon Apr 15 23:00:31 2024 00:33:46.290 read: IOPS=542, BW=2171KiB/s (2224kB/s)(21.3MiB/10032msec) 00:33:46.290 slat (usec): min=4, max=125, avg=24.90, stdev=21.29 00:33:46.290 clat (usec): min=2074, max=59137, avg=29242.39, stdev=6318.43 00:33:46.290 lat (usec): min=2082, max=59166, avg=29267.29, stdev=6321.67 00:33:46.290 clat percentiles (usec): 00:33:46.290 | 1.00th=[ 2999], 5.00th=[19006], 10.00th=[22414], 20.00th=[28705], 00:33:46.290 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:46.290 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31589], 95.00th=[37487], 00:33:46.290 | 99.00th=[49021], 99.50th=[53740], 99.90th=[58983], 99.95th=[58983], 00:33:46.290 | 99.99th=[58983] 00:33:46.290 bw ( KiB/s): min= 2000, max= 2992, per=4.32%, avg=2176.00, stdev=220.48, samples=20 00:33:46.290 iops : min= 500, max= 748, avg=544.00, stdev=55.12, samples=20 00:33:46.290 lat (msec) : 4=1.89%, 10=0.46%, 20=3.51%, 50=93.55%, 100=0.59% 00:33:46.290 cpu : usr=97.20%, sys=1.59%, ctx=130, majf=0, minf=9 00:33:46.290 IO depths : 1=1.8%, 2=4.0%, 4=14.5%, 8=67.9%, 16=11.8%, 32=0.0%, >=64=0.0% 
00:33:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 complete : 0=0.0%, 4=91.9%, 8=3.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.290 issued rwts: total=5446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=1362083: Mon Apr 15 23:00:31 2024 00:33:46.290 read: IOPS=536, BW=2145KiB/s (2197kB/s)(21.0MiB/10024msec) 00:33:46.290 slat (nsec): min=2872, max=63105, avg=9238.58, stdev=6457.90 00:33:46.290 clat (usec): min=881, max=35374, avg=29753.68, stdev=3928.47 00:33:46.290 lat (usec): min=887, max=35383, avg=29762.92, stdev=3928.90 00:33:46.290 clat percentiles (usec): 00:33:46.290 | 1.00th=[ 2999], 5.00th=[28967], 10.00th=[29492], 20.00th=[29754], 00:33:46.290 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:46.291 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:46.291 | 99.00th=[32375], 99.50th=[32637], 99.90th=[34866], 99.95th=[34866], 00:33:46.291 | 99.99th=[35390] 00:33:46.291 bw ( KiB/s): min= 2048, max= 2816, per=4.26%, avg=2144.60, stdev=170.56, samples=20 00:33:46.291 iops : min= 512, max= 704, avg=536.15, stdev=42.64, samples=20 00:33:46.291 lat (usec) : 1000=0.04% 00:33:46.291 lat (msec) : 2=0.13%, 4=1.62%, 10=0.30%, 50=97.92% 00:33:46.291 cpu : usr=99.07%, sys=0.59%, ctx=144, majf=0, minf=9 00:33:46.291 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:46.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.291 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.291 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.291 filename2: (groupid=0, jobs=1): err= 0: pid=1362084: Mon Apr 15 23:00:31 2024 00:33:46.291 read: IOPS=527, BW=2111KiB/s (2162kB/s)(20.6MiB/10005msec) 00:33:46.291 slat (usec): min=5, max=124, avg=27.76, stdev=20.64 00:33:46.291 clat (usec): min=8970, max=37643, avg=30047.57, stdev=1558.93 00:33:46.291 lat (usec): min=8976, max=37653, avg=30075.33, stdev=1559.63 00:33:46.291 clat percentiles (usec): 00:33:46.291 | 1.00th=[28181], 5.00th=[28967], 10.00th=[29492], 20.00th=[29754], 00:33:46.291 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:46.291 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31327], 00:33:46.291 | 99.00th=[32637], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:33:46.291 | 99.99th=[37487] 00:33:46.291 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=63.38, samples=19 00:33:46.291 iops : min= 512, max= 544, avg=525.47, stdev=15.84, samples=19 00:33:46.291 lat (msec) : 10=0.30%, 20=0.30%, 50=99.39% 00:33:46.291 cpu : usr=99.30%, sys=0.40%, ctx=74, majf=0, minf=9 00:33:46.291 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:33:46.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.291 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.291 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.291 filename2: (groupid=0, jobs=1): err= 0: pid=1362085: Mon Apr 15 23:00:31 2024 00:33:46.291 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10016msec) 00:33:46.291 slat (usec): min=5, max=128, avg=24.46, stdev=22.85 
00:33:46.291 clat (usec): min=15175, max=51452, avg=30204.43, stdev=4186.94 00:33:46.291 lat (usec): min=15206, max=51458, avg=30228.89, stdev=4187.44 00:33:46.291 clat percentiles (usec): 00:33:46.291 | 1.00th=[16581], 5.00th=[22676], 10.00th=[28705], 20.00th=[29492], 00:33:46.291 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:46.291 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31589], 95.00th=[35390], 00:33:46.291 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:33:46.291 | 99.99th=[51643] 00:33:46.291 bw ( KiB/s): min= 1923, max= 2224, per=4.17%, avg=2101.75, stdev=73.99, samples=20 00:33:46.291 iops : min= 480, max= 556, avg=525.40, stdev=18.59, samples=20 00:33:46.291 lat (msec) : 20=2.79%, 50=96.66%, 100=0.55% 00:33:46.291 cpu : usr=99.04%, sys=0.55%, ctx=130, majf=0, minf=9 00:33:46.291 IO depths : 1=3.7%, 2=8.3%, 4=19.2%, 8=59.4%, 16=9.5%, 32=0.0%, >=64=0.0% 00:33:46.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.291 complete : 0=0.0%, 4=92.7%, 8=2.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.291 issued rwts: total=5268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.291 00:33:46.291 Run status group 0 (all jobs): 00:33:46.291 READ: bw=49.2MiB/s (51.5MB/s), 2015KiB/s-2171KiB/s (2063kB/s-2224kB/s), io=493MiB (517MB), run=10001-10032msec 00:33:46.551 23:00:31 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:46.551 23:00:31 -- target/dif.sh@43 -- # local sub 00:33:46.551 23:00:31 -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.551 23:00:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:46.551 23:00:31 -- target/dif.sh@36 -- # local sub_id=0 00:33:46.551 23:00:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:46.551 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.551 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.551 23:00:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:46.551 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.552 23:00:31 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:46.552 23:00:31 -- target/dif.sh@36 -- # local sub_id=1 00:33:46.552 23:00:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.552 23:00:31 -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:46.552 23:00:31 -- target/dif.sh@36 -- # local sub_id=2 00:33:46.552 23:00:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 
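(A quick cross-check of the run summary just above: each job's "per=" value is its share of the group's aggregate READ bandwidth, 49.2 MiB/s for this run, so the 2101.89 KiB/s jobs come out at about 4.17%:

  # per-job share of the aggregate bandwidth, numbers taken from the run status line above
  awk 'BEGIN { printf "per=%.2f%%\n", 2101.89 / (49.2 * 1024) * 100 }'   # -> per=4.17%
)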
00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@115 -- # NULL_DIF=1 00:33:46.552 23:00:31 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:46.552 23:00:31 -- target/dif.sh@115 -- # numjobs=2 00:33:46.552 23:00:31 -- target/dif.sh@115 -- # iodepth=8 00:33:46.552 23:00:31 -- target/dif.sh@115 -- # runtime=5 00:33:46.552 23:00:31 -- target/dif.sh@115 -- # files=1 00:33:46.552 23:00:31 -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:46.552 23:00:31 -- target/dif.sh@28 -- # local sub 00:33:46.552 23:00:31 -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.552 23:00:31 -- target/dif.sh@31 -- # create_subsystem 0 00:33:46.552 23:00:31 -- target/dif.sh@18 -- # local sub_id=0 00:33:46.552 23:00:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 bdev_null0 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 [2024-04-15 23:00:31.279693] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.552 23:00:31 -- target/dif.sh@31 -- # create_subsystem 1 00:33:46.552 23:00:31 -- target/dif.sh@18 -- # local sub_id=1 00:33:46.552 23:00:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 bdev_null1 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
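(The per-subsystem setup being replayed here, and continued just below for cnode1, is a four-step sequence: create a null bdev with 16-byte metadata and DIF type 1, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. With the tcp transport already created earlier in the run, a standalone equivalent via scripts/rpc.py would look roughly like this, arguments copied from the log; rpc_cmd is the harness wrapper around rpc.py:

  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
)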
00:33:46.552 23:00:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.552 23:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.552 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:33:46.552 23:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.552 23:00:31 -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:46.552 23:00:31 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:46.552 23:00:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:46.552 23:00:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.552 23:00:31 -- nvmf/common.sh@520 -- # config=() 00:33:46.552 23:00:31 -- nvmf/common.sh@520 -- # local subsystem config 00:33:46.552 23:00:31 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.552 23:00:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:46.552 23:00:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:46.552 { 00:33:46.552 "params": { 00:33:46.552 "name": "Nvme$subsystem", 00:33:46.552 "trtype": "$TEST_TRANSPORT", 00:33:46.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.552 "adrfam": "ipv4", 00:33:46.552 "trsvcid": "$NVMF_PORT", 00:33:46.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.552 "hdgst": ${hdgst:-false}, 00:33:46.552 "ddgst": ${ddgst:-false} 00:33:46.552 }, 00:33:46.552 "method": "bdev_nvme_attach_controller" 00:33:46.552 } 00:33:46.552 EOF 00:33:46.552 )") 00:33:46.552 23:00:31 -- target/dif.sh@82 -- # gen_fio_conf 00:33:46.552 23:00:31 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:46.552 23:00:31 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:46.552 23:00:31 -- target/dif.sh@54 -- # local file 00:33:46.552 23:00:31 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:46.552 23:00:31 -- target/dif.sh@56 -- # cat 00:33:46.552 23:00:31 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.552 23:00:31 -- common/autotest_common.sh@1320 -- # shift 00:33:46.552 23:00:31 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:46.552 23:00:31 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.552 23:00:31 -- nvmf/common.sh@542 -- # cat 00:33:46.552 23:00:31 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.552 23:00:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:46.552 23:00:31 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:46.552 23:00:31 -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.552 23:00:31 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:46.552 23:00:31 -- target/dif.sh@73 -- # cat 00:33:46.552 23:00:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:46.552 23:00:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:46.552 { 
00:33:46.552 "params": { 00:33:46.552 "name": "Nvme$subsystem", 00:33:46.552 "trtype": "$TEST_TRANSPORT", 00:33:46.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.552 "adrfam": "ipv4", 00:33:46.552 "trsvcid": "$NVMF_PORT", 00:33:46.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.552 "hdgst": ${hdgst:-false}, 00:33:46.552 "ddgst": ${ddgst:-false} 00:33:46.552 }, 00:33:46.552 "method": "bdev_nvme_attach_controller" 00:33:46.552 } 00:33:46.552 EOF 00:33:46.552 )") 00:33:46.552 23:00:31 -- target/dif.sh@72 -- # (( file++ )) 00:33:46.552 23:00:31 -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.552 23:00:31 -- nvmf/common.sh@542 -- # cat 00:33:46.552 23:00:31 -- nvmf/common.sh@544 -- # jq . 00:33:46.552 23:00:31 -- nvmf/common.sh@545 -- # IFS=, 00:33:46.552 23:00:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:46.552 "params": { 00:33:46.552 "name": "Nvme0", 00:33:46.552 "trtype": "tcp", 00:33:46.552 "traddr": "10.0.0.2", 00:33:46.552 "adrfam": "ipv4", 00:33:46.552 "trsvcid": "4420", 00:33:46.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.552 "hdgst": false, 00:33:46.552 "ddgst": false 00:33:46.552 }, 00:33:46.552 "method": "bdev_nvme_attach_controller" 00:33:46.552 },{ 00:33:46.552 "params": { 00:33:46.552 "name": "Nvme1", 00:33:46.552 "trtype": "tcp", 00:33:46.552 "traddr": "10.0.0.2", 00:33:46.552 "adrfam": "ipv4", 00:33:46.552 "trsvcid": "4420", 00:33:46.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:46.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:46.552 "hdgst": false, 00:33:46.552 "ddgst": false 00:33:46.552 }, 00:33:46.552 "method": "bdev_nvme_attach_controller" 00:33:46.552 }' 00:33:46.834 23:00:31 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:46.834 23:00:31 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:46.835 23:00:31 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.835 23:00:31 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.835 23:00:31 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:46.835 23:00:31 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:46.835 23:00:31 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:46.835 23:00:31 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:46.835 23:00:31 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:46.835 23:00:31 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.101 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:47.101 ... 00:33:47.101 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:47.101 ... 00:33:47.101 fio-3.35 00:33:47.101 Starting 4 threads 00:33:47.101 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.671 [2024-04-15 23:00:32.324380] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:47.671 [2024-04-15 23:00:32.324417] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:52.984 00:33:52.984 filename0: (groupid=0, jobs=1): err= 0: pid=1364591: Mon Apr 15 23:00:37 2024 00:33:52.984 read: IOPS=2200, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5001msec) 00:33:52.984 slat (nsec): min=5327, max=56844, avg=6127.66, stdev=2239.70 00:33:52.984 clat (usec): min=2118, max=6947, avg=3617.28, stdev=402.98 00:33:52.984 lat (usec): min=2124, max=6953, avg=3623.41, stdev=402.94 00:33:52.984 clat percentiles (usec): 00:33:52.984 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3392], 00:33:52.984 | 30.00th=[ 3425], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3621], 00:33:52.984 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3884], 95.00th=[ 4424], 00:33:52.984 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 6128], 99.95th=[ 6325], 00:33:52.984 | 99.99th=[ 6915] 00:33:52.984 bw ( KiB/s): min=16896, max=18000, per=25.00%, avg=17575.33, stdev=457.82, samples=9 00:33:52.984 iops : min= 2112, max= 2250, avg=2196.89, stdev=57.26, samples=9 00:33:52.984 lat (msec) : 4=91.91%, 10=8.09% 00:33:52.984 cpu : usr=97.48%, sys=2.32%, ctx=8, majf=0, minf=53 00:33:52.984 IO depths : 1=0.1%, 2=0.5%, 4=73.8%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.984 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.984 issued rwts: total=11007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.984 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.984 filename0: (groupid=0, jobs=1): err= 0: pid=1364592: Mon Apr 15 23:00:37 2024 00:33:52.984 read: IOPS=2209, BW=17.3MiB/s (18.1MB/s)(86.4MiB/5004msec) 00:33:52.984 slat (nsec): min=7768, max=57022, avg=8627.73, stdev=2128.37 00:33:52.984 clat (usec): min=1672, max=6637, avg=3596.15, stdev=399.12 00:33:52.984 lat (usec): min=1696, max=6645, avg=3604.78, stdev=398.98 00:33:52.984 clat percentiles (usec): 00:33:52.984 | 1.00th=[ 2769], 5.00th=[ 3163], 10.00th=[ 3326], 20.00th=[ 3392], 00:33:52.984 | 30.00th=[ 3425], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:33:52.984 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3851], 95.00th=[ 4424], 00:33:52.984 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 6063], 99.95th=[ 6128], 00:33:52.984 | 99.99th=[ 6652] 00:33:52.984 bw ( KiB/s): min=16912, max=18048, per=25.13%, avg=17662.22, stdev=379.26, samples=9 00:33:52.984 iops : min= 2114, max= 2256, avg=2207.78, stdev=47.41, samples=9 00:33:52.984 lat (msec) : 2=0.07%, 4=92.05%, 10=7.88% 00:33:52.984 cpu : usr=96.94%, sys=2.80%, ctx=17, majf=0, minf=33 00:33:52.984 IO depths : 1=0.1%, 2=0.4%, 4=73.6%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.984 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.984 issued rwts: total=11055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.984 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.984 filename1: (groupid=0, jobs=1): err= 0: pid=1364593: Mon Apr 15 23:00:37 2024 00:33:52.984 read: IOPS=2199, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5003msec) 00:33:52.984 slat (nsec): min=5333, max=29586, avg=6033.63, stdev=1826.04 00:33:52.984 clat (usec): min=2178, max=45110, avg=3618.82, stdev=1182.88 00:33:52.984 lat (usec): min=2184, max=45140, avg=3624.86, stdev=1183.10 00:33:52.984 clat percentiles (usec): 00:33:52.984 | 1.00th=[ 2769], 5.00th=[ 
3163], 10.00th=[ 3326], 20.00th=[ 3392], 00:33:52.984 | 30.00th=[ 3425], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3589], 00:33:52.984 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3752], 95.00th=[ 4293], 00:33:52.984 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 6194], 99.95th=[44827], 00:33:52.984 | 99.99th=[45351] 00:33:52.984 bw ( KiB/s): min=15776, max=18000, per=25.00%, avg=17575.11, stdev=737.13, samples=9 00:33:52.984 iops : min= 1972, max= 2250, avg=2196.89, stdev=92.14, samples=9 00:33:52.984 lat (msec) : 4=93.42%, 10=6.51%, 50=0.07% 00:33:52.984 cpu : usr=97.24%, sys=2.54%, ctx=7, majf=0, minf=61 00:33:52.984 IO depths : 1=0.1%, 2=0.5%, 4=73.8%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.984 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.984 issued rwts: total=11006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.984 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.984 filename1: (groupid=0, jobs=1): err= 0: pid=1364594: Mon Apr 15 23:00:37 2024 00:33:52.984 read: IOPS=2179, BW=17.0MiB/s (17.9MB/s)(85.2MiB/5001msec) 00:33:52.984 slat (nsec): min=5330, max=34548, avg=6524.89, stdev=2295.06 00:33:52.984 clat (usec): min=952, max=5910, avg=3652.73, stdev=701.25 00:33:52.984 lat (usec): min=969, max=5916, avg=3659.25, stdev=701.13 00:33:52.984 clat percentiles (usec): 00:33:52.984 | 1.00th=[ 2212], 5.00th=[ 2704], 10.00th=[ 2999], 20.00th=[ 3294], 00:33:52.984 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3556], 00:33:52.984 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 5145], 95.00th=[ 5342], 00:33:52.984 | 99.00th=[ 5473], 99.50th=[ 5473], 99.90th=[ 5735], 99.95th=[ 5800], 00:33:52.984 | 99.99th=[ 5932] 00:33:52.984 bw ( KiB/s): min=16512, max=19600, per=24.94%, avg=17534.22, stdev=1205.86, samples=9 00:33:52.984 iops : min= 2064, max= 2450, avg=2191.78, stdev=150.73, samples=9 00:33:52.984 lat (usec) : 1000=0.01% 00:33:52.984 lat (msec) : 2=0.40%, 4=84.61%, 10=14.97% 00:33:52.984 cpu : usr=97.78%, sys=2.00%, ctx=6, majf=0, minf=34 00:33:52.984 IO depths : 1=0.1%, 2=0.4%, 4=72.0%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.984 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.984 issued rwts: total=10900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.984 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.984 00:33:52.984 Run status group 0 (all jobs): 00:33:52.984 READ: bw=68.6MiB/s (72.0MB/s), 17.0MiB/s-17.3MiB/s (17.9MB/s-18.1MB/s), io=344MiB (360MB), run=5001-5004msec 00:33:52.984 23:00:37 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:52.984 23:00:37 -- target/dif.sh@43 -- # local sub 00:33:52.984 23:00:37 -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.984 23:00:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:52.984 23:00:37 -- target/dif.sh@36 -- # local sub_id=0 00:33:52.984 23:00:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:52.984 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.984 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.984 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.984 23:00:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:52.984 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.984 23:00:37 -- 
common/autotest_common.sh@10 -- # set +x 00:33:52.984 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.984 23:00:37 -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.984 23:00:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:52.984 23:00:37 -- target/dif.sh@36 -- # local sub_id=1 00:33:52.984 23:00:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.984 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.984 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.984 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.984 23:00:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:52.984 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.984 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.984 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.984 00:33:52.984 real 0m24.283s 00:33:52.984 user 5m16.719s 00:33:52.984 sys 0m3.958s 00:33:52.984 23:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.985 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.985 ************************************ 00:33:52.985 END TEST fio_dif_rand_params 00:33:52.985 ************************************ 00:33:52.985 23:00:37 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:52.985 23:00:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:52.985 23:00:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:52.985 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.985 ************************************ 00:33:52.985 START TEST fio_dif_digest 00:33:52.985 ************************************ 00:33:52.985 23:00:37 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:33:52.985 23:00:37 -- target/dif.sh@123 -- # local NULL_DIF 00:33:52.985 23:00:37 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:52.985 23:00:37 -- target/dif.sh@125 -- # local hdgst ddgst 00:33:52.985 23:00:37 -- target/dif.sh@127 -- # NULL_DIF=3 00:33:52.985 23:00:37 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:52.985 23:00:37 -- target/dif.sh@127 -- # numjobs=3 00:33:52.985 23:00:37 -- target/dif.sh@127 -- # iodepth=3 00:33:52.985 23:00:37 -- target/dif.sh@127 -- # runtime=10 00:33:52.985 23:00:37 -- target/dif.sh@128 -- # hdgst=true 00:33:52.985 23:00:37 -- target/dif.sh@128 -- # ddgst=true 00:33:52.985 23:00:37 -- target/dif.sh@130 -- # create_subsystems 0 00:33:52.985 23:00:37 -- target/dif.sh@28 -- # local sub 00:33:52.985 23:00:37 -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.985 23:00:37 -- target/dif.sh@31 -- # create_subsystem 0 00:33:52.985 23:00:37 -- target/dif.sh@18 -- # local sub_id=0 00:33:52.985 23:00:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:52.985 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.985 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.985 bdev_null0 00:33:52.985 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.985 23:00:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:52.985 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.985 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.985 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.985 23:00:37 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:52.985 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.985 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.985 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.985 23:00:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.985 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.985 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:33:52.985 [2024-04-15 23:00:37.717371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.985 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.985 23:00:37 -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:52.985 23:00:37 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:52.985 23:00:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:52.985 23:00:37 -- nvmf/common.sh@520 -- # config=() 00:33:52.985 23:00:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.985 23:00:37 -- nvmf/common.sh@520 -- # local subsystem config 00:33:52.985 23:00:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:52.985 23:00:37 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.985 23:00:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:52.985 { 00:33:52.985 "params": { 00:33:52.985 "name": "Nvme$subsystem", 00:33:52.985 "trtype": "$TEST_TRANSPORT", 00:33:52.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.985 "adrfam": "ipv4", 00:33:52.985 "trsvcid": "$NVMF_PORT", 00:33:52.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.985 "hdgst": ${hdgst:-false}, 00:33:52.985 "ddgst": ${ddgst:-false} 00:33:52.985 }, 00:33:52.985 "method": "bdev_nvme_attach_controller" 00:33:52.985 } 00:33:52.985 EOF 00:33:52.985 )") 00:33:52.985 23:00:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:52.985 23:00:37 -- target/dif.sh@82 -- # gen_fio_conf 00:33:52.985 23:00:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.985 23:00:37 -- target/dif.sh@54 -- # local file 00:33:52.985 23:00:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:52.985 23:00:37 -- target/dif.sh@56 -- # cat 00:33:52.985 23:00:37 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.985 23:00:37 -- common/autotest_common.sh@1320 -- # shift 00:33:52.985 23:00:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:52.985 23:00:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.985 23:00:37 -- nvmf/common.sh@542 -- # cat 00:33:52.985 23:00:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.985 23:00:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:52.985 23:00:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:52.985 23:00:37 -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.985 23:00:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:52.985 23:00:37 -- nvmf/common.sh@544 -- # jq . 
00:33:52.985 23:00:37 -- nvmf/common.sh@545 -- # IFS=, 00:33:52.985 23:00:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:52.985 "params": { 00:33:52.985 "name": "Nvme0", 00:33:52.985 "trtype": "tcp", 00:33:52.985 "traddr": "10.0.0.2", 00:33:52.985 "adrfam": "ipv4", 00:33:52.985 "trsvcid": "4420", 00:33:52.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.985 "hdgst": true, 00:33:52.985 "ddgst": true 00:33:52.985 }, 00:33:52.985 "method": "bdev_nvme_attach_controller" 00:33:52.985 }' 00:33:52.985 23:00:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:52.985 23:00:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:52.985 23:00:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.985 23:00:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.985 23:00:37 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:52.985 23:00:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:53.267 23:00:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:53.267 23:00:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:53.267 23:00:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:53.267 23:00:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.531 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:53.531 ... 00:33:53.531 fio-3.35 00:33:53.531 Starting 3 threads 00:33:53.531 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.791 [2024-04-15 23:00:38.402395] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
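(The only change for this digest pass is in the attach parameters printed above: "hdgst": true and "ddgst": true, so NVMe/TCP PDUs on the connection carry CRC32C header and data digests that the target verifies. For comparison, a kernel-initiator connection to the same subsystem with digests enabled would look roughly like the following; the nvme-cli digest flags are assumed here, and the test itself stays on the SPDK initiator path shown above:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn=nqn.2016-06.io.spdk:host0 --hdr-digest --data-digest
)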
00:33:53.791 [2024-04-15 23:00:38.402451] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:03.799 00:34:03.799 filename0: (groupid=0, jobs=1): err= 0: pid=1365808: Mon Apr 15 23:00:48 2024 00:34:03.799 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(257MiB/10049msec) 00:34:03.799 slat (nsec): min=5736, max=29459, avg=7020.48, stdev=1248.73 00:34:03.799 clat (usec): min=7155, max=95589, avg=14654.89, stdev=7680.58 00:34:03.799 lat (usec): min=7161, max=95597, avg=14661.91, stdev=7680.61 00:34:03.799 clat percentiles (usec): 00:34:03.799 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[11207], 20.00th=[12518], 00:34:03.799 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:34:03.799 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15401], 95.00th=[16057], 00:34:03.799 | 99.00th=[55313], 99.50th=[56361], 99.90th=[94897], 99.95th=[94897], 00:34:03.799 | 99.99th=[95945] 00:34:03.799 bw ( KiB/s): min=17664, max=30720, per=32.54%, avg=26252.80, stdev=3364.05, samples=20 00:34:03.799 iops : min= 138, max= 240, avg=205.10, stdev=26.28, samples=20 00:34:03.799 lat (msec) : 10=3.46%, 20=93.91%, 50=0.10%, 100=2.53% 00:34:03.799 cpu : usr=96.62%, sys=3.16%, ctx=16, majf=0, minf=163 00:34:03.799 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.799 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.799 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:03.799 filename0: (groupid=0, jobs=1): err= 0: pid=1365809: Mon Apr 15 23:00:48 2024 00:34:03.799 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10049msec) 00:34:03.799 slat (nsec): min=5900, max=61719, avg=8998.07, stdev=2001.42 00:34:03.799 clat (usec): min=7896, max=95455, avg=14275.46, stdev=6716.04 00:34:03.799 lat (usec): min=7902, max=95465, avg=14284.46, stdev=6716.01 00:34:03.799 clat percentiles (usec): 00:34:03.799 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[11076], 20.00th=[12256], 00:34:03.799 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:34:03.799 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15270], 95.00th=[15795], 00:34:03.799 | 99.00th=[55313], 99.50th=[55837], 99.90th=[57934], 99.95th=[94897], 00:34:03.799 | 99.99th=[95945] 00:34:03.799 bw ( KiB/s): min=22784, max=30720, per=33.40%, avg=26944.00, stdev=2160.89, samples=20 00:34:03.799 iops : min= 178, max= 240, avg=210.50, stdev=16.88, samples=20 00:34:03.799 lat (msec) : 10=3.08%, 20=94.64%, 50=0.05%, 100=2.23% 00:34:03.799 cpu : usr=95.84%, sys=3.90%, ctx=19, majf=0, minf=47 00:34:03.799 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.799 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.799 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:03.799 filename0: (groupid=0, jobs=1): err= 0: pid=1365810: Mon Apr 15 23:00:48 2024 00:34:03.799 read: IOPS=216, BW=27.0MiB/s (28.4MB/s)(272MiB/10047msec) 00:34:03.799 slat (nsec): min=5721, max=55929, avg=8111.89, stdev=1776.18 00:34:03.799 clat (usec): min=8332, max=95371, avg=13833.28, stdev=4471.55 00:34:03.799 lat (usec): min=8339, max=95377, avg=13841.39, stdev=4471.63 00:34:03.799 clat percentiles 
(usec): 00:34:03.799 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10683], 20.00th=[12125], 00:34:03.799 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13829], 60.00th=[14091], 00:34:03.799 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15401], 95.00th=[15926], 00:34:03.799 | 99.00th=[20841], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:34:03.799 | 99.99th=[94897] 00:34:03.799 bw ( KiB/s): min=22528, max=30976, per=34.46%, avg=27801.60, stdev=2105.14, samples=20 00:34:03.799 iops : min= 176, max= 242, avg=217.20, stdev=16.45, samples=20 00:34:03.799 lat (msec) : 10=4.19%, 20=94.80%, 50=0.14%, 100=0.87% 00:34:03.799 cpu : usr=96.06%, sys=3.38%, ctx=564, majf=0, minf=127 00:34:03.799 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.799 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.799 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:03.799 00:34:03.799 Run status group 0 (all jobs): 00:34:03.799 READ: bw=78.8MiB/s (82.6MB/s), 25.5MiB/s-27.0MiB/s (26.8MB/s-28.4MB/s), io=792MiB (830MB), run=10047-10049msec 00:34:04.061 23:00:48 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:04.061 23:00:48 -- target/dif.sh@43 -- # local sub 00:34:04.061 23:00:48 -- target/dif.sh@45 -- # for sub in "$@" 00:34:04.061 23:00:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:04.061 23:00:48 -- target/dif.sh@36 -- # local sub_id=0 00:34:04.061 23:00:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:04.061 23:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:04.061 23:00:48 -- common/autotest_common.sh@10 -- # set +x 00:34:04.061 23:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:04.061 23:00:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:04.061 23:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:04.061 23:00:48 -- common/autotest_common.sh@10 -- # set +x 00:34:04.061 23:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:04.061 00:34:04.061 real 0m11.026s 00:34:04.061 user 0m44.384s 00:34:04.061 sys 0m1.318s 00:34:04.061 23:00:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.061 23:00:48 -- common/autotest_common.sh@10 -- # set +x 00:34:04.061 ************************************ 00:34:04.061 END TEST fio_dif_digest 00:34:04.061 ************************************ 00:34:04.061 23:00:48 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:04.061 23:00:48 -- target/dif.sh@147 -- # nvmftestfini 00:34:04.061 23:00:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:04.061 23:00:48 -- nvmf/common.sh@116 -- # sync 00:34:04.061 23:00:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:04.061 23:00:48 -- nvmf/common.sh@119 -- # set +e 00:34:04.061 23:00:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:04.061 23:00:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:04.061 rmmod nvme_tcp 00:34:04.061 rmmod nvme_fabrics 00:34:04.061 rmmod nvme_keyring 00:34:04.061 23:00:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:04.061 23:00:48 -- nvmf/common.sh@123 -- # set -e 00:34:04.061 23:00:48 -- nvmf/common.sh@124 -- # return 0 00:34:04.061 23:00:48 -- nvmf/common.sh@477 -- # '[' -n 1354969 ']' 00:34:04.061 23:00:48 -- nvmf/common.sh@478 -- # killprocess 1354969 00:34:04.061 23:00:48 -- 
common/autotest_common.sh@926 -- # '[' -z 1354969 ']' 00:34:04.061 23:00:48 -- common/autotest_common.sh@930 -- # kill -0 1354969 00:34:04.061 23:00:48 -- common/autotest_common.sh@931 -- # uname 00:34:04.061 23:00:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:04.061 23:00:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1354969 00:34:04.061 23:00:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:04.061 23:00:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:04.061 23:00:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1354969' 00:34:04.061 killing process with pid 1354969 00:34:04.061 23:00:48 -- common/autotest_common.sh@945 -- # kill 1354969 00:34:04.061 23:00:48 -- common/autotest_common.sh@950 -- # wait 1354969 00:34:04.321 23:00:48 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:04.321 23:00:48 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:08.528 Waiting for block devices as requested 00:34:08.528 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:08.528 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:08.528 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:08.528 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:08.528 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:08.528 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:08.528 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:08.528 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:08.789 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:08.789 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:09.049 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:09.049 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:09.049 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:09.050 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:09.309 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:09.309 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:09.309 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:09.569 23:00:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:09.569 23:00:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:09.569 23:00:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:09.569 23:00:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:09.569 23:00:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.569 23:00:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:09.569 23:00:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.113 23:00:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:12.113 00:34:12.113 real 1m18.887s 00:34:12.113 user 8m1.892s 00:34:12.113 sys 0m20.291s 00:34:12.113 23:00:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:12.113 23:00:56 -- common/autotest_common.sh@10 -- # set +x 00:34:12.113 ************************************ 00:34:12.113 END TEST nvmf_dif 00:34:12.113 ************************************ 00:34:12.113 23:00:56 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:12.113 23:00:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:12.113 23:00:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:12.113 23:00:56 -- common/autotest_common.sh@10 -- # set +x 00:34:12.113 ************************************ 00:34:12.113 START TEST nvmf_abort_qd_sizes 
00:34:12.113 ************************************ 00:34:12.113 23:00:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:12.113 * Looking for test storage... 00:34:12.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:12.113 23:00:56 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.113 23:00:56 -- nvmf/common.sh@7 -- # uname -s 00:34:12.113 23:00:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.113 23:00:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.113 23:00:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.113 23:00:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.113 23:00:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.114 23:00:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.114 23:00:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.114 23:00:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.114 23:00:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.114 23:00:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.114 23:00:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:12.114 23:00:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:12.114 23:00:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.114 23:00:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.114 23:00:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.114 23:00:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.114 23:00:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.114 23:00:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.114 23:00:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.114 23:00:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.114 23:00:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.114 23:00:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.114 23:00:56 -- paths/export.sh@5 -- # export PATH 00:34:12.114 23:00:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.114 23:00:56 -- nvmf/common.sh@46 -- # : 0 00:34:12.114 23:00:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:12.114 23:00:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:12.114 23:00:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:12.114 23:00:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.114 23:00:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.114 23:00:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:12.114 23:00:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:12.114 23:00:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:12.114 23:00:56 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:34:12.114 23:00:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:12.114 23:00:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.114 23:00:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:12.114 23:00:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:12.114 23:00:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:12.114 23:00:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.114 23:00:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:12.114 23:00:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.114 23:00:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:12.114 23:00:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:12.114 23:00:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:12.114 23:00:56 -- common/autotest_common.sh@10 -- # set +x 00:34:20.249 23:01:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:20.249 23:01:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:20.249 23:01:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:20.249 23:01:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:20.249 23:01:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:20.249 23:01:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:20.249 23:01:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:20.249 23:01:04 -- nvmf/common.sh@294 -- # net_devs=() 00:34:20.249 23:01:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:20.249 23:01:04 -- nvmf/common.sh@295 -- # e810=() 00:34:20.249 23:01:04 -- nvmf/common.sh@295 -- # local -ga e810 00:34:20.249 23:01:04 -- nvmf/common.sh@296 -- # x722=() 00:34:20.249 23:01:04 -- nvmf/common.sh@296 -- # local -ga x722 00:34:20.249 23:01:04 -- nvmf/common.sh@297 -- # mlx=() 00:34:20.249 23:01:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:20.249 23:01:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.249 23:01:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:20.249 23:01:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:20.249 23:01:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:20.249 23:01:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:20.249 23:01:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:20.249 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:20.249 23:01:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:20.249 23:01:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:20.249 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:20.249 23:01:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:20.249 23:01:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:20.249 23:01:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.249 23:01:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:20.249 23:01:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.249 23:01:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:20.249 Found net devices under 0000:31:00.0: cvl_0_0 00:34:20.249 23:01:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.249 23:01:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:20.249 23:01:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.249 23:01:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:20.249 23:01:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.249 23:01:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:20.249 Found net devices under 0000:31:00.1: cvl_0_1 00:34:20.249 23:01:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.249 23:01:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:20.249 23:01:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:20.249 23:01:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:20.249 23:01:04 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:20.249 23:01:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:20.249 23:01:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.249 23:01:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.249 23:01:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.249 23:01:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:20.249 23:01:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.249 23:01:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.249 23:01:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:20.249 23:01:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.249 23:01:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.249 23:01:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:20.249 23:01:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:20.249 23:01:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.249 23:01:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.249 23:01:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.249 23:01:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.249 23:01:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:20.249 23:01:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.249 23:01:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.249 23:01:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.249 23:01:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:20.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:34:20.249 00:34:20.249 --- 10.0.0.2 ping statistics --- 00:34:20.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.249 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:34:20.249 23:01:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:20.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:34:20.249 00:34:20.249 --- 10.0.0.1 ping statistics --- 00:34:20.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.249 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:34:20.249 23:01:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.249 23:01:04 -- nvmf/common.sh@410 -- # return 0 00:34:20.249 23:01:04 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:20.249 23:01:04 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:23.550 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:23.550 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:23.811 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:23.811 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:23.811 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:23.811 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:24.071 23:01:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:24.071 23:01:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:24.071 23:01:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:24.071 23:01:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:24.071 23:01:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:24.071 23:01:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:24.071 23:01:08 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:34:24.071 23:01:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:24.071 23:01:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:24.071 23:01:08 -- common/autotest_common.sh@10 -- # set +x 00:34:24.071 23:01:08 -- nvmf/common.sh@469 -- # nvmfpid=1376243 00:34:24.071 23:01:08 -- nvmf/common.sh@470 -- # waitforlisten 1376243 00:34:24.071 23:01:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:24.071 23:01:08 -- common/autotest_common.sh@819 -- # '[' -z 1376243 ']' 00:34:24.071 23:01:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:24.071 23:01:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:24.071 23:01:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:24.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:24.071 23:01:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:24.071 23:01:08 -- common/autotest_common.sh@10 -- # set +x 00:34:24.071 [2024-04-15 23:01:08.813628] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
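The nvmf_tcp_init trace above is what isolates the link under test: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, both get addresses on 10.0.0.0/24, TCP port 4420 is opened, and connectivity is verified with one ping in each direction before nvmf_tgt is started inside the namespace. A minimal stand-alone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 interface names from this run (they differ per machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

Every target-side command that follows, nvmf_tgt included, is wrapped in 'ip netns exec cvl_0_0_ns_spdk'.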
00:34:24.071 [2024-04-15 23:01:08.813712] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:24.071 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.331 [2024-04-15 23:01:08.888400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:24.331 [2024-04-15 23:01:08.954029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:24.331 [2024-04-15 23:01:08.954163] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:24.331 [2024-04-15 23:01:08.954173] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:24.331 [2024-04-15 23:01:08.954182] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:24.331 [2024-04-15 23:01:08.954285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.331 [2024-04-15 23:01:08.954397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:24.331 [2024-04-15 23:01:08.954536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.331 [2024-04-15 23:01:08.954536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:24.900 23:01:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:24.900 23:01:09 -- common/autotest_common.sh@852 -- # return 0 00:34:24.900 23:01:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:24.901 23:01:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:24.901 23:01:09 -- common/autotest_common.sh@10 -- # set +x 00:34:24.901 23:01:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:34:24.901 23:01:09 -- scripts/common.sh@311 -- # local bdf bdfs 00:34:24.901 23:01:09 -- scripts/common.sh@312 -- # local nvmes 00:34:24.901 23:01:09 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:34:24.901 23:01:09 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:24.901 23:01:09 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:34:24.901 23:01:09 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:34:24.901 23:01:09 -- scripts/common.sh@322 -- # uname -s 00:34:24.901 23:01:09 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:34:24.901 23:01:09 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:34:24.901 23:01:09 -- scripts/common.sh@327 -- # (( 1 )) 00:34:24.901 23:01:09 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:34:24.901 23:01:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:24.901 23:01:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:24.901 23:01:09 -- common/autotest_common.sh@10 -- # set +x 00:34:24.901 ************************************ 00:34:24.901 START TEST 
spdk_target_abort 00:34:24.901 ************************************ 00:34:24.901 23:01:09 -- common/autotest_common.sh@1104 -- # spdk_target 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:24.901 23:01:09 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:34:24.901 23:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.901 23:01:09 -- common/autotest_common.sh@10 -- # set +x 00:34:25.161 spdk_targetn1 00:34:25.161 23:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:25.161 23:01:09 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:25.161 23:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:25.161 23:01:09 -- common/autotest_common.sh@10 -- # set +x 00:34:25.161 [2024-04-15 23:01:09.964499] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:25.421 23:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:25.421 23:01:09 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:34:25.421 23:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:25.421 23:01:09 -- common/autotest_common.sh@10 -- # set +x 00:34:25.421 23:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:25.421 23:01:09 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:34:25.421 23:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:25.421 23:01:09 -- common/autotest_common.sh@10 -- # set +x 00:34:25.421 23:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:25.421 23:01:09 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:34:25.421 23:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:25.421 23:01:09 -- common/autotest_common.sh@10 -- # set +x 00:34:25.421 [2024-04-15 23:01:10.004747] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:25.421 23:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:25.421 23:01:10 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:34:25.421 23:01:10 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:25.421 23:01:10 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:25.421 23:01:10 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:25.422 23:01:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:25.422 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.422 [2024-04-15 23:01:10.115466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:160 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:34:25.422 [2024-04-15 23:01:10.115494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0016 p:1 m:0 dnr:0 00:34:25.422 [2024-04-15 23:01:10.126875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:504 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:34:25.422 [2024-04-15 23:01:10.126892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:34:25.422 [2024-04-15 23:01:10.137034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1128 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:34:25.422 [2024-04-15 23:01:10.137051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:008e p:1 m:0 dnr:0 00:34:25.422 [2024-04-15 23:01:10.137667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1176 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:34:25.422 [2024-04-15 23:01:10.137679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0096 p:1 m:0 dnr:0 00:34:25.422 [2024-04-15 23:01:10.169419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2632 len:8 PRP1 0x2000078be000 PRP2 0x0 00:34:25.422 [2024-04-15 23:01:10.169435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:25.422 [2024-04-15 23:01:10.192959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3696 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:34:25.422 [2024-04-15 23:01:10.192975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00cf p:0 m:0 dnr:0 00:34:25.422 [2024-04-15 23:01:10.195774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3952 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:34:25.422 [2024-04-15 23:01:10.195788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00ef p:0 m:0 dnr:0 00:34:28.747 Initializing NVMe Controllers 00:34:28.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:28.747 Associating 
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:28.747 Initialization complete. Launching workers. 00:34:28.747 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 17415, failed: 7 00:34:28.747 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 3574, failed to submit 13848 00:34:28.747 success 693, unsuccess 2881, failed 0 00:34:28.747 23:01:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:28.747 23:01:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:28.747 EAL: No free 2048 kB hugepages reported on node 1 00:34:28.747 [2024-04-15 23:01:13.440692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:1024 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:34:28.747 [2024-04-15 23:01:13.440733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:008b p:1 m:0 dnr:0 00:34:28.747 [2024-04-15 23:01:13.456753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1432 len:8 PRP1 0x200007c50000 PRP2 0x0 00:34:28.747 [2024-04-15 23:01:13.456777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:34:28.747 [2024-04-15 23:01:13.472698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:1752 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:34:28.747 [2024-04-15 23:01:13.472721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00e4 p:1 m:0 dnr:0 00:34:28.747 [2024-04-15 23:01:13.480690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:1936 len:8 PRP1 0x200007c42000 PRP2 0x0 00:34:28.747 [2024-04-15 23:01:13.480712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:34:28.747 [2024-04-15 23:01:13.528011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:3056 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:34:28.747 [2024-04-15 23:01:13.528034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0087 p:0 m:0 dnr:0 00:34:28.747 [2024-04-15 23:01:13.535601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3296 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:34:28.747 [2024-04-15 23:01:13.535623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:009d p:0 m:0 dnr:0 00:34:28.747 [2024-04-15 23:01:13.542191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:3432 len:8 PRP1 0x200007c44000 PRP2 0x0 00:34:28.747 [2024-04-15 23:01:13.542212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00c0 p:0 m:0 dnr:0 00:34:30.715 [2024-04-15 23:01:15.422696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:46872 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:34:30.715 [2024-04-15 23:01:15.422732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00ea p:1 m:0 
dnr:0 00:34:31.655 [2024-04-15 23:01:16.431567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b140 is same with the state(5) to be set 00:34:31.655 [2024-04-15 23:01:16.431598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b140 is same with the state(5) to be set 00:34:31.655 [2024-04-15 23:01:16.431606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b140 is same with the state(5) to be set 00:34:31.655 [2024-04-15 23:01:16.431612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b140 is same with the state(5) to be set 00:34:31.924 Initializing NVMe Controllers 00:34:31.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:31.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:31.925 Initialization complete. Launching workers. 00:34:31.925 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8603, failed: 8 00:34:31.925 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1203, failed to submit 7408 00:34:31.925 success 372, unsuccess 831, failed 0 00:34:31.925 23:01:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:31.925 23:01:16 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:31.925 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.498 [2024-04-15 23:01:17.075566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:175 nsid:1 lba:48856 len:8 PRP1 0x200007906000 PRP2 0x0 00:34:32.498 [2024-04-15 23:01:17.075592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:175 cdw0:0 sqhd:00f9 p:0 m:0 dnr:0 00:34:35.037 Initializing NVMe Controllers 00:34:35.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:35.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:35.037 Initialization complete. Launching workers. 
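Stripped of the xtrace noise, the spdk_target_abort case above is a handful of RPCs followed by a loop over the abort example. A condensed sketch, using scripts/rpc.py in place of the test framework's rpc_cmd wrapper (an assumption; the method names, NQN, serial, PCI address and queue depths are the values from this run):

  RPC=./scripts/rpc.py                               # talks to /var/tmp/spdk.sock of the nvmf_tgt started above
  $RPC bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # exposes the local drive as bdev spdk_targetn1
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
  done

The per-run summaries ('I/O completed', 'abort submitted', 'success/unsuccess/failed') come from the abort example itself; the test's pass criterion is essentially that every invocation exits cleanly.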
00:34:35.037 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 43746, failed: 1 00:34:35.037 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2471, failed to submit 41276 00:34:35.037 success 575, unsuccess 1896, failed 0 00:34:35.037 23:01:19 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:34:35.037 23:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:35.037 23:01:19 -- common/autotest_common.sh@10 -- # set +x 00:34:35.037 23:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:35.037 23:01:19 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:35.037 23:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:35.037 23:01:19 -- common/autotest_common.sh@10 -- # set +x 00:34:36.946 23:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.946 23:01:21 -- target/abort_qd_sizes.sh@62 -- # killprocess 1376243 00:34:36.946 23:01:21 -- common/autotest_common.sh@926 -- # '[' -z 1376243 ']' 00:34:36.946 23:01:21 -- common/autotest_common.sh@930 -- # kill -0 1376243 00:34:36.946 23:01:21 -- common/autotest_common.sh@931 -- # uname 00:34:36.946 23:01:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:36.946 23:01:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1376243 00:34:36.946 23:01:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:36.946 23:01:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:36.946 23:01:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1376243' 00:34:36.946 killing process with pid 1376243 00:34:36.946 23:01:21 -- common/autotest_common.sh@945 -- # kill 1376243 00:34:36.946 23:01:21 -- common/autotest_common.sh@950 -- # wait 1376243 00:34:36.946 00:34:36.946 real 0m12.058s 00:34:36.946 user 0m48.187s 00:34:36.946 sys 0m2.107s 00:34:36.946 23:01:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.946 23:01:21 -- common/autotest_common.sh@10 -- # set +x 00:34:36.946 ************************************ 00:34:36.946 END TEST spdk_target_abort 00:34:36.946 ************************************ 00:34:36.946 23:01:21 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:34:36.946 23:01:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:36.946 23:01:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:36.946 23:01:21 -- common/autotest_common.sh@10 -- # set +x 00:34:37.207 ************************************ 00:34:37.207 START TEST kernel_target_abort 00:34:37.207 ************************************ 00:34:37.207 23:01:21 -- common/autotest_common.sh@1104 -- # kernel_target 00:34:37.207 23:01:21 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:34:37.207 23:01:21 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:34:37.207 23:01:21 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:34:37.207 23:01:21 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:34:37.207 23:01:21 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:34:37.207 23:01:21 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:37.207 23:01:21 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:37.207 23:01:21 -- nvmf/common.sh@627 -- # local block nvme 00:34:37.207 23:01:21 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:34:37.207 23:01:21 -- nvmf/common.sh@630 -- # modprobe nvmet 00:34:37.207 23:01:21 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:37.207 23:01:21 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:40.504 Waiting for block devices as requested 00:34:40.764 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:40.764 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:40.764 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:41.024 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:41.024 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:41.024 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:41.024 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:41.284 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:41.284 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:41.544 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:41.544 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:41.544 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:41.544 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:41.803 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:41.803 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:41.803 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:42.063 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:42.323 23:01:26 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:34:42.323 23:01:26 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:42.323 23:01:26 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:34:42.323 23:01:26 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:34:42.323 23:01:26 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:42.323 No valid GPT data, bailing 00:34:42.323 23:01:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:42.323 23:01:26 -- scripts/common.sh@393 -- # pt= 00:34:42.323 23:01:26 -- scripts/common.sh@394 -- # return 1 00:34:42.323 23:01:26 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:34:42.323 23:01:26 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:34:42.323 23:01:26 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:42.323 23:01:26 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:42.323 23:01:26 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:42.323 23:01:26 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:34:42.323 23:01:26 -- nvmf/common.sh@654 -- # echo 1 00:34:42.323 23:01:26 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:34:42.323 23:01:26 -- nvmf/common.sh@656 -- # echo 1 00:34:42.323 23:01:26 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:34:42.323 23:01:26 -- nvmf/common.sh@663 -- # echo tcp 00:34:42.323 23:01:26 -- nvmf/common.sh@664 -- # echo 4420 00:34:42.323 23:01:26 -- nvmf/common.sh@665 -- # echo ipv4 00:34:42.323 23:01:26 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:42.323 23:01:27 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:34:42.323 00:34:42.323 Discovery Log Number of Records 2, Generation counter 2 00:34:42.323 =====Discovery Log Entry 0====== 00:34:42.323 trtype: tcp 00:34:42.323 adrfam: ipv4 00:34:42.323 
subtype: current discovery subsystem 00:34:42.323 treq: not specified, sq flow control disable supported 00:34:42.323 portid: 1 00:34:42.323 trsvcid: 4420 00:34:42.323 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:42.323 traddr: 10.0.0.1 00:34:42.323 eflags: none 00:34:42.323 sectype: none 00:34:42.323 =====Discovery Log Entry 1====== 00:34:42.323 trtype: tcp 00:34:42.323 adrfam: ipv4 00:34:42.323 subtype: nvme subsystem 00:34:42.323 treq: not specified, sq flow control disable supported 00:34:42.323 portid: 1 00:34:42.323 trsvcid: 4420 00:34:42.323 subnqn: kernel_target 00:34:42.323 traddr: 10.0.0.1 00:34:42.323 eflags: none 00:34:42.323 sectype: none 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:42.323 23:01:27 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:42.323 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.616 Initializing NVMe Controllers 00:34:45.616 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:45.616 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:45.616 Initialization complete. Launching workers. 
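For kernel_target_abort the target is the in-kernel nvmet driver rather than nvmf_tgt, configured through configfs as traced above. xtrace prints the values being written but not the files they are redirected into, so the attribute names below follow the standard nvmet configfs layout and are an assumption; the device path, address, port and subsystem name are the ones from this run:

  modprobe nvmet nvmet_tcp        # the trace only probes nvmet explicitly; nvmet_tcp shows up in the later teardown
  cd /sys/kernel/config/nvmet
  mkdir subsystems/kernel_target
  echo 1 > subsystems/kernel_target/attr_allow_any_host
  mkdir subsystems/kernel_target/namespaces/1
  echo /dev/nvme0n1 > subsystems/kernel_target/namespaces/1/device_path
  echo 1 > subsystems/kernel_target/namespaces/1/enable
  mkdir ports/1
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp      > ports/1/addr_trtype
  echo 4420     > ports/1/addr_trsvcid
  echo ipv4     > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/kernel_target
  nvme discover -t tcp -a 10.0.0.1 -s 4420     # should return the two discovery log entries shown above

The abort loop is then re-run with traddr:10.0.0.1 and subnqn:kernel_target at the same 4/24/64 queue depths.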
00:34:45.616 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 57303, failed: 0 00:34:45.616 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 57303, failed to submit 0 00:34:45.616 success 0, unsuccess 57303, failed 0 00:34:45.616 23:01:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.616 23:01:30 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:45.616 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.910 Initializing NVMe Controllers 00:34:48.910 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:48.910 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:48.910 Initialization complete. Launching workers. 00:34:48.910 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 99100, failed: 0 00:34:48.910 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24994, failed to submit 74106 00:34:48.910 success 0, unsuccess 24994, failed 0 00:34:48.910 23:01:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:48.910 23:01:33 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:48.910 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.207 Initializing NVMe Controllers 00:34:52.207 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:52.207 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:52.207 Initialization complete. Launching workers. 
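Once the last queue-depth run completes (its summary follows), clean_kernel_target unwinds the same configfs tree in reverse. The traced commands condense to the sketch below, where the target of the bare 'echo 0' is assumed to be the namespace enable switch, since the redirection is not visible in xtrace:

  echo 0 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable   # assumed target of 'echo 0'
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
  rmdir  /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/kernel_target
  modprobe -r nvmet_tcp nvmet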
00:34:52.207 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 94910, failed: 0 00:34:52.207 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 23746, failed to submit 71164 00:34:52.207 success 0, unsuccess 23746, failed 0 00:34:52.207 23:01:36 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:34:52.207 23:01:36 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:34:52.207 23:01:36 -- nvmf/common.sh@677 -- # echo 0 00:34:52.207 23:01:36 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:34:52.207 23:01:36 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:52.207 23:01:36 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:52.207 23:01:36 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:52.207 23:01:36 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:34:52.207 23:01:36 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:34:52.207 00:34:52.207 real 0m14.652s 00:34:52.207 user 0m7.564s 00:34:52.207 sys 0m3.854s 00:34:52.207 23:01:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:52.207 23:01:36 -- common/autotest_common.sh@10 -- # set +x 00:34:52.207 ************************************ 00:34:52.207 END TEST kernel_target_abort 00:34:52.207 ************************************ 00:34:52.207 23:01:36 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:34:52.207 23:01:36 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:34:52.207 23:01:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:52.207 23:01:36 -- nvmf/common.sh@116 -- # sync 00:34:52.207 23:01:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:52.207 23:01:36 -- nvmf/common.sh@119 -- # set +e 00:34:52.207 23:01:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:52.207 23:01:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:52.207 rmmod nvme_tcp 00:34:52.207 rmmod nvme_fabrics 00:34:52.207 rmmod nvme_keyring 00:34:52.207 23:01:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:52.207 23:01:36 -- nvmf/common.sh@123 -- # set -e 00:34:52.207 23:01:36 -- nvmf/common.sh@124 -- # return 0 00:34:52.207 23:01:36 -- nvmf/common.sh@477 -- # '[' -n 1376243 ']' 00:34:52.207 23:01:36 -- nvmf/common.sh@478 -- # killprocess 1376243 00:34:52.207 23:01:36 -- common/autotest_common.sh@926 -- # '[' -z 1376243 ']' 00:34:52.207 23:01:36 -- common/autotest_common.sh@930 -- # kill -0 1376243 00:34:52.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1376243) - No such process 00:34:52.207 23:01:36 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1376243 is not found' 00:34:52.207 Process with pid 1376243 is not found 00:34:52.207 23:01:36 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:52.207 23:01:36 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:55.510 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:55.510 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:55.510 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:55.771 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:55.771 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:55.771 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:55.771 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:34:55.771 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:55.771 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:55.771 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:55.771 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:55.771 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:55.771 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:56.032 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:56.033 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:56.033 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:56.033 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:56.294 23:01:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:56.294 23:01:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:56.294 23:01:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:56.294 23:01:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:56.294 23:01:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.294 23:01:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:56.294 23:01:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.252 23:01:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:58.252 00:34:58.252 real 0m46.550s 00:34:58.252 user 1m1.555s 00:34:58.252 sys 0m17.564s 00:34:58.252 23:01:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.252 23:01:42 -- common/autotest_common.sh@10 -- # set +x 00:34:58.252 ************************************ 00:34:58.252 END TEST nvmf_abort_qd_sizes 00:34:58.252 ************************************ 00:34:58.252 23:01:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:58.252 23:01:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:58.253 23:01:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:58.253 23:01:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:58.253 23:01:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:58.253 23:01:43 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:34:58.253 23:01:43 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:34:58.253 23:01:43 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:34:58.253 23:01:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:58.253 23:01:43 -- common/autotest_common.sh@10 -- # set +x 00:34:58.253 23:01:43 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:34:58.253 23:01:43 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:34:58.253 23:01:43 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:34:58.253 23:01:43 -- common/autotest_common.sh@10 -- # set +x 00:35:06.468 INFO: APP EXITING 00:35:06.468 INFO: killing all VMs 00:35:06.468 INFO: killing vhost app 00:35:06.468 INFO: EXIT DONE 00:35:09.772 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:80:01.7 (8086 
0b00): Already using the ioatdma driver 00:35:09.772 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:65:00.0 (144d a80a): Already using the nvme driver 00:35:09.772 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:35:09.772 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:35:13.982 Cleaning 00:35:13.982 Removing: /var/run/dpdk/spdk0/config 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:13.982 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:13.982 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:13.982 Removing: /var/run/dpdk/spdk1/config 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:13.982 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:13.982 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:13.982 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:13.982 Removing: /var/run/dpdk/spdk2/config 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:13.983 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:13.983 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:13.983 Removing: /var/run/dpdk/spdk3/config 00:35:13.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:13.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:13.983 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:13.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:13.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:13.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:13.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:13.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:13.983 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:13.983 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:13.983 Removing: /var/run/dpdk/spdk4/config 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:13.983 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:13.983 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:13.983 Removing: /dev/shm/bdev_svc_trace.1 00:35:13.983 Removing: /dev/shm/nvmf_trace.0 00:35:13.983 Removing: /dev/shm/spdk_tgt_trace.pid891081 00:35:13.983 Removing: /var/run/dpdk/spdk0 00:35:13.983 Removing: /var/run/dpdk/spdk1 00:35:13.983 Removing: /var/run/dpdk/spdk2 00:35:13.983 Removing: /var/run/dpdk/spdk3 00:35:13.983 Removing: /var/run/dpdk/spdk4 00:35:13.983 Removing: /var/run/dpdk/spdk_pid1024517 00:35:13.983 Removing: /var/run/dpdk/spdk_pid1030240 00:35:13.983 Removing: /var/run/dpdk/spdk_pid1041513 00:35:13.983 Removing: /var/run/dpdk/spdk_pid1048857 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1054262 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1055023 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1066600 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1067042 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1072604 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1079991 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1082979 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1096240 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1108341 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1110683 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1111826 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1133641 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1138698 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1144482 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1146509 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1148794 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1148901 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1149234 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1149491 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1150052 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1152358 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1153445 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1154036 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1161620 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1168915 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1174681 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1221049 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1225908 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1233858 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1235374 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1237138 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1242598 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1248024 00:35:14.244 
Removing: /var/run/dpdk/spdk_pid1257922 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1257944 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1263929 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1264262 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1264603 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1264948 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1265003 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1266332 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1268351 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1270377 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1272395 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1274311 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1276249 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1283893 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1284725 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1285596 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1286491 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1293051 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1296398 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1303459 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1311317 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1318671 00:35:14.244 Removing: /var/run/dpdk/spdk_pid1319418 00:35:14.245 Removing: /var/run/dpdk/spdk_pid1320202 00:35:14.245 Removing: /var/run/dpdk/spdk_pid1320896 00:35:14.245 Removing: /var/run/dpdk/spdk_pid1321895 00:35:14.245 Removing: /var/run/dpdk/spdk_pid1322654 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1323351 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1324041 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1329667 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1329850 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1337564 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1337924 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1340459 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1348311 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1348377 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1355028 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1357663 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1360358 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1361880 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1364115 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1365648 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1376578 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1377054 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1377659 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1380731 00:35:14.505 Removing: /var/run/dpdk/spdk_pid1381405 00:35:14.506 Removing: /var/run/dpdk/spdk_pid1381991 00:35:14.506 Removing: /var/run/dpdk/spdk_pid889508 00:35:14.506 Removing: /var/run/dpdk/spdk_pid891081 00:35:14.506 Removing: /var/run/dpdk/spdk_pid891859 00:35:14.506 Removing: /var/run/dpdk/spdk_pid892918 00:35:14.506 Removing: /var/run/dpdk/spdk_pid893478 00:35:14.506 Removing: /var/run/dpdk/spdk_pid893851 00:35:14.506 Removing: /var/run/dpdk/spdk_pid894235 00:35:14.506 Removing: /var/run/dpdk/spdk_pid894636 00:35:14.506 Removing: /var/run/dpdk/spdk_pid895031 00:35:14.506 Removing: /var/run/dpdk/spdk_pid895185 00:35:14.506 Removing: /var/run/dpdk/spdk_pid895420 00:35:14.506 Removing: /var/run/dpdk/spdk_pid895804 00:35:14.506 Removing: /var/run/dpdk/spdk_pid897204 00:35:14.506 Removing: /var/run/dpdk/spdk_pid900502 00:35:14.506 Removing: /var/run/dpdk/spdk_pid900858 00:35:14.506 Removing: /var/run/dpdk/spdk_pid901220 00:35:14.506 Removing: /var/run/dpdk/spdk_pid901452 00:35:14.506 Removing: /var/run/dpdk/spdk_pid901956 00:35:14.506 Removing: /var/run/dpdk/spdk_pid902021 00:35:14.506 Removing: 
00:35:14.506 Removing: /var/run/dpdk/spdk_pid902571
00:35:14.506 Removing: /var/run/dpdk/spdk_pid902757
00:35:14.506 Removing: /var/run/dpdk/spdk_pid903125
00:35:14.506 Removing: /var/run/dpdk/spdk_pid903140
00:35:14.506 Removing: /var/run/dpdk/spdk_pid903602
00:35:14.506 Removing: /var/run/dpdk/spdk_pid903845
00:35:14.506 Removing: /var/run/dpdk/spdk_pid904551
00:35:14.506 Removing: /var/run/dpdk/spdk_pid904769
00:35:14.506 Removing: /var/run/dpdk/spdk_pid905166
00:35:14.506 Removing: /var/run/dpdk/spdk_pid905532
00:35:14.506 Removing: /var/run/dpdk/spdk_pid905552
00:35:14.506 Removing: /var/run/dpdk/spdk_pid905616
00:35:14.506 Removing: /var/run/dpdk/spdk_pid905950
00:35:14.506 Removing: /var/run/dpdk/spdk_pid906299
00:35:14.506 Removing: /var/run/dpdk/spdk_pid906582
00:35:14.506 Removing: /var/run/dpdk/spdk_pid906768
00:35:14.506 Removing: /var/run/dpdk/spdk_pid907012
00:35:14.506 Removing: /var/run/dpdk/spdk_pid907361
00:35:14.506 Removing: /var/run/dpdk/spdk_pid907703
00:35:14.506 Removing: /var/run/dpdk/spdk_pid907969
00:35:14.506 Removing: /var/run/dpdk/spdk_pid908122
00:35:14.506 Removing: /var/run/dpdk/spdk_pid908425
00:35:14.506 Removing: /var/run/dpdk/spdk_pid908764
00:35:14.767 Removing: /var/run/dpdk/spdk_pid909115
00:35:14.767 Removing: /var/run/dpdk/spdk_pid909305
00:35:14.767 Removing: /var/run/dpdk/spdk_pid909510
00:35:14.767 Removing: /var/run/dpdk/spdk_pid909826
00:35:14.767 Removing: /var/run/dpdk/spdk_pid910177
00:35:14.767 Removing: /var/run/dpdk/spdk_pid910499
00:35:14.767 Removing: /var/run/dpdk/spdk_pid910689
00:35:14.767 Removing: /var/run/dpdk/spdk_pid910884
00:35:14.767 Removing: /var/run/dpdk/spdk_pid911233
00:35:14.767 Removing: /var/run/dpdk/spdk_pid911575
00:35:14.767 Removing: /var/run/dpdk/spdk_pid911856
00:35:14.767 Removing: /var/run/dpdk/spdk_pid911999
00:35:14.767 Removing: /var/run/dpdk/spdk_pid912295
00:35:14.767 Removing: /var/run/dpdk/spdk_pid912635
00:35:14.767 Removing: /var/run/dpdk/spdk_pid912987
00:35:14.767 Removing: /var/run/dpdk/spdk_pid913175
00:35:14.767 Removing: /var/run/dpdk/spdk_pid913378
00:35:14.767 Removing: /var/run/dpdk/spdk_pid913698
00:35:14.767 Removing: /var/run/dpdk/spdk_pid914048
00:35:14.767 Removing: /var/run/dpdk/spdk_pid914329
00:35:14.767 Removing: /var/run/dpdk/spdk_pid914523
00:35:14.767 Removing: /var/run/dpdk/spdk_pid914762
00:35:14.767 Removing: /var/run/dpdk/spdk_pid915114
00:35:14.767 Removing: /var/run/dpdk/spdk_pid915455
00:35:14.767 Removing: /var/run/dpdk/spdk_pid915723
00:35:14.767 Removing: /var/run/dpdk/spdk_pid915868
00:35:14.767 Removing: /var/run/dpdk/spdk_pid916184
00:35:14.767 Removing: /var/run/dpdk/spdk_pid916520
00:35:14.767 Removing: /var/run/dpdk/spdk_pid916870
00:35:14.767 Removing: /var/run/dpdk/spdk_pid916942
00:35:14.767 Removing: /var/run/dpdk/spdk_pid917346
00:35:14.767 Removing: /var/run/dpdk/spdk_pid922201
00:35:14.767 Clean
00:35:24.771 killing process with pid 826877
00:35:24.771 killing process with pid 826874
00:35:24.771 killing process with pid 826876
00:35:24.771 killing process with pid 826875
00:35:24.771 23:02:08 -- common/autotest_common.sh@1436 -- # return 0
00:35:24.771 23:02:08 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:35:24.771 23:02:08 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:24.771 23:02:08 -- common/autotest_common.sh@10 -- # set +x
00:35:24.771 23:02:08 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:35:24.771 23:02:08 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:24.771 23:02:08 -- common/autotest_common.sh@10 -- # set +x
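The block above is the autotest post-cleanup stage: it drops the per-process DPDK runtime state for the spdk0 through spdk4 prefixes (config, fbarray_memseg-*, fbarray_memzone and hugepage_info under /var/run/dpdk), the SPDK trace shared-memory files under /dev/shm, and the stale /var/run/dpdk/spdk_pid* lock files left by earlier SPDK applications, then kills the remaining helper processes before closing the post_cleanup and autotest timing regions. A minimal manual equivalent is sketched below; it is only an illustration built from the paths shown in the log, not the harness's own cleanup script.

# Illustrative sketch only - an equivalent of the cleanup logged above,
# not the script the autotest harness actually runs.
for prefix in /var/run/dpdk/spdk0 /var/run/dpdk/spdk1 /var/run/dpdk/spdk2 \
              /var/run/dpdk/spdk3 /var/run/dpdk/spdk4; do
    sudo rm -rf "$prefix"            # config, fbarray_memseg-*, fbarray_memzone, hugepage_info
done
sudo rm -f /var/run/dpdk/spdk_pid*   # stale per-PID lock files from earlier SPDK apps
sudo rm -f /dev/shm/bdev_svc_trace.* /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*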
00:35:24.771 23:02:09 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:24.771 23:02:09 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:24.771 23:02:09 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:24.771 23:02:09 -- spdk/autotest.sh@394 -- # hash lcov
00:35:24.771 23:02:09 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:35:24.771 23:02:09 -- spdk/autotest.sh@396 -- # hostname
00:35:24.771 23:02:09 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-CYP-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:24.782 geninfo: WARNING: invalid characters removed from testname!
00:35:46.734 23:02:31 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:49.276 23:02:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:51.184 23:02:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:52.647 23:02:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:54.029 23:02:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:55.411 23:02:40 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
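Taken together, the lcov commands above implement a capture, merge and filter flow: capture the counters produced while the tests ran, merge them with the pre-test baseline, then strip paths that are not SPDK code under test (the bundled DPDK, system headers, the vmd example and the spdk_lspci/spdk_top apps). A condensed sketch of the same flow follows; the option list is abridged from the log, and OUT is introduced here only as shorthand for the .../spdk/../output directory.

# Condensed sketch of the coverage flow traced above (options abridged from the log;
# OUT is shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output).
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
# 1. Capture the counters gathered while the tests ran, tagged with the hostname.
lcov $LCOV_OPTS -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t "$(hostname)" -o "$OUT/cov_test.info"
# 2. Merge the pre-test baseline with the test capture.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# 3. Remove paths that are not SPDK code under test.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done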
00:35:57.320 23:02:42 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:57.581 23:02:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:57.581 23:02:42 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:35:57.581 23:02:42 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:57.582 23:02:42 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:57.582 23:02:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:57.582 23:02:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:57.582 23:02:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:57.582 23:02:42 -- paths/export.sh@5 -- $ export PATH
00:35:57.582 23:02:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:57.582 23:02:42 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:35:57.582 23:02:42 -- common/autobuild_common.sh@435 -- $ date +%s
00:35:57.582 23:02:42 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713214962.XXXXXX
00:35:57.582 23:02:42 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713214962.Yhlbq3
00:35:57.582 23:02:42 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:35:57.582 23:02:42 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:35:57.582 23:02:42 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:35:57.582 23:02:42 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:35:57.582 23:02:42 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
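autobuild_common.sh above assembles a scan-build wrapper (results under .../output/scan-build-tmp, with the bundled DPDK, xnvme and /tmp excluded, and --status-bugs so the exit code reflects any findings), but the conditional at autopackage.sh@13 evaluates false in this run, so the wrapper is never executed. Purely as a hypothetical usage sketch, driving a build through such a wrapper would look roughly like the following; SPDK_DIR and OUT are shorthands for the paths shown in the log.

# Hypothetical usage only: this run never invokes the scan-build wrapper it
# assembles above. Sketch of how such a wrapper would drive a build.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK_DIR/../output
scan-build -o "$OUT/scan-build-tmp" \
    --exclude "$SPDK_DIR/dpdk/" --exclude "$SPDK_DIR/xnvme" --exclude /tmp \
    --status-bugs make -C "$SPDK_DIR" -j"$(nproc)"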
00:35:57.582 23:02:42 -- common/autobuild_common.sh@451 -- $ get_config_params
00:35:57.582 23:02:42 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:35:57.582 23:02:42 -- common/autotest_common.sh@10 -- $ set +x
00:35:57.582 23:02:42 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:35:57.582 23:02:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:35:57.582 23:02:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:57.582 23:02:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:35:57.582 23:02:42 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:35:57.582 23:02:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:35:57.582 23:02:42 -- spdk/autopackage.sh@19 -- $ timing_finish
00:35:57.582 23:02:42 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:57.582 23:02:42 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:35:57.582 23:02:42 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:57.582 23:02:42 -- spdk/autopackage.sh@20 -- $ exit 0
00:35:57.593 + [[ -n 784596 ]]
00:35:57.593 + sudo kill 784596
00:35:57.603 [Pipeline] }
00:35:57.611 [Pipeline] // stage
00:35:57.615 [Pipeline] }
00:35:57.631 [Pipeline] // timeout
00:35:57.636 [Pipeline] }
00:35:57.651 [Pipeline] // catchError
00:35:57.655 [Pipeline] }
00:35:57.672 [Pipeline] // wrap
00:35:57.677 [Pipeline] }
00:35:57.690 [Pipeline] // catchError
00:35:57.698 [Pipeline] stage
00:35:57.699 [Pipeline] { (Epilogue)
00:35:57.711 [Pipeline] catchError
00:35:57.713 [Pipeline] {
00:35:57.725 [Pipeline] echo
00:35:57.727 Cleanup processes
00:35:57.733 [Pipeline] sh
00:35:58.025 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:58.025 1398822 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:58.039 [Pipeline] sh
00:35:58.325 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:58.325 ++ grep -v 'sudo pgrep'
00:35:58.325 ++ awk '{print $1}'
00:35:58.325 + sudo kill -9
00:35:58.325 + true
00:35:58.338 [Pipeline] sh
00:35:58.623 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:10.862 [Pipeline] sh
00:36:11.152 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:11.152 Artifacts sizes are good
00:36:11.171 [Pipeline] archiveArtifacts
00:36:11.183 Archiving artifacts
00:36:11.427 [Pipeline] sh
00:36:11.713 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:11.732 [Pipeline] cleanWs
00:36:11.743 [WS-CLEANUP] Deleting project workspace...
00:36:11.743 [WS-CLEANUP] Deferred wipeout is used...
00:36:11.750 [WS-CLEANUP] done
00:36:11.752 [Pipeline] }
00:36:11.772 [Pipeline] // catchError
00:36:11.788 [Pipeline] sh
00:36:12.075 + logger -p user.info -t JENKINS-CI
00:36:12.087 [Pipeline] }
00:36:12.105 [Pipeline] // stage
00:36:12.113 [Pipeline] }
00:36:12.132 [Pipeline] // node
00:36:12.141 [Pipeline] End of Pipeline
00:36:12.185 Finished: SUCCESS